
On 7 September, 1976, dozens of children, including every single pupil from class 3, Scarfolk High School, vanished on their way to school. A police operation was launched but no clues were ever found. The children were pronounced dead the following Monday, a mere three days later.
Every year thereafter, the police commissioned their sketch artist to draw, in the style of a school photograph, how the missing children might have looked (albeit with their faces removed) had they not disappeared in mysterious circumstances. This was sent to the bereaved parents of class 3 at an exorbitant cost of £31.25.
In the 1979 class sketch, one parent noticed a small label on one of the faceless figure's clothes that contained a code word only their child could have known.
Under mounting pressure from parents, the police eventually raided their artist's studio and found 347 children in his cellar, where many had been held captive for several years. The police immediately seized and confined the children as evidence in a crime investigation, which, after much dithering, ultimately never went to court, leaving the families no choice but to pursue a private prosecution against the kidnapper.
As the children had already been pronounced dead and the cost of amending the relevant paperwork was high, they were given away as prizes in the Scarfolk police raffle, which helped pay the legal fees of their sketch artist, who, it turns out, was the son of Scarfolk's police commissioner.
RESTful APIs are everywhere. This is funny, because how many people really know what "RESTful" is supposed to mean?
I think most of us can empathize with this Hacker News poster:
I've read several articles about REST, even a bit of the original paper. But I still have quite a vague idea about what it is. I'm beginning to think that nobody knows, that it's simply a very poorly defined concept.
I had planned to write a blog post exploring how REST came to be such a dominant paradigm for communication across the internet. I started my research by reading Roy Fielding's 2000 dissertation, which introduced REST to the world. After reading Fielding's dissertation, I realized that the much more interesting story here is how Fielding's ideas came to be so widely misunderstood.
Many more people know that Fielding's dissertation is where REST came from than have read the dissertation (fair enough), so misconceptions about what the dissertation actually contains are pervasive.
The biggest of these misconceptions is that the dissertation directly addresses the problem of building APIs. I had always assumed, as I imagine many people do, that REST was intended from the get-go as an architectural model for web APIs built on top of HTTP. I thought perhaps that there had been some chaotic experimental period where people were building APIs on top of HTTP all wrong, and then Fielding came along and presented REST as the sane way to do things. But the timeline doesn't make sense here: APIs for web services, in the sense that we know them today, weren't a thing until a few years after Fielding published his dissertation.
Fielding's dissertation (titled "Architectural Styles and the Design of Network-based Software Architectures") is not about how to build APIs on top of HTTP but rather about HTTP itself. Fielding contributed to the HTTP/1.0 specification and co-authored the HTTP/1.1 specification, which was published in 1999. He was interested in the architectural lessons that could be drawn from the design of the HTTP protocol; his dissertation presents REST as a distillation of the architectural principles that guided the standardization process for HTTP/1.1. Fielding used these principles to make decisions about which proposals to incorporate into HTTP/1.1. For example, he rejected a proposal to batch requests using new MGET and MHEAD methods because he felt the proposal violated the constraints prescribed by REST, especially the constraint that messages in a REST system should be easy to proxy and cache.[1] So HTTP/1.1 was instead designed around persistent connections over which multiple HTTP requests can be sent. (Fielding also felt that cookies are not RESTful because they add state to what should be a stateless system, but their usage was already entrenched.[2]) REST, for Fielding, was not a guide to building HTTP-based systems but a guide to extending HTTP.
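The persistent-connection design is easy to sketch. In the toy example below, two ordinary GET requests share one connection; the hostname and paths are invented for illustration, and only the message framing is shown, not a real network client.

```python
# Sketch: with HTTP/1.1 persistent connections, several ordinary GET
# requests can travel over one TCP connection -- no special MGET batch
# method needed. Host and paths here are hypothetical.

def build_request(method: str, path: str, host: str) -> bytes:
    """Frame a single HTTP/1.1 request; keep-alive is the default."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"\r\n"
    ).encode("ascii")

# Two independent requests sent back-to-back on one connection.
payload = (
    build_request("GET", "/styles.css", "example.com")
    + build_request("GET", "/logo.png", "example.com")
)

# Each request remains a self-contained message, so any intermediary
# (proxy, cache) can understand and handle it individually -- the
# property Fielding felt a batched MGET would have undermined.
assert payload.count(b"HTTP/1.1") == 2
assert b"Connection: close" not in payload
```

The point of the sketch is that batching falls out of connection reuse for free, without inventing new methods that proxies and caches would not understand.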
This isn't to say that Fielding doesn't think REST could be used to build other systems. It's just that he assumes these other systems will also be "distributed hypermedia systems." This is another misconception people have about REST: that it is a general architecture you can use for any kind of networked application. But you could sum up the part of the dissertation where Fielding introduces REST as, essentially, "Listen, we just designed HTTP, so if you also find yourself designing a distributed hypermedia system you should use this cool architecture we worked out called REST to make things easier." It's not obvious why Fielding thinks anyone would ever attempt to build such a thing given that the web already exists; perhaps in 2000 it seemed like there was room for more than one distributed hypermedia system in the world. Anyway, Fielding makes clear that REST is intended as a solution for the scalability and consistency problems that arise when trying to connect hypermedia across the internet, not as an architectural model for distributed applications in general.
We remember Fielding's dissertation now as the dissertation that introduced REST, but really the dissertation is about how much one-size-fits-all software architectures suck, and how you can better pick a software architecture appropriate for your needs. Only a single chapter of the dissertation is devoted to REST itself; much of the word count is spent on a taxonomy of alternative architectural styles[3] that one could use for networked applications. Among these is the Pipe-and-Filter (PF) style, inspired by Unix pipes, along with various refinements of the Client-Server style (CS), such as Layered-Client-Server (LCS), Client-Cache-Stateless-Server (C$SS), and Layered-Client-Cache-Stateless-Server (LC$SS). The acronyms get unwieldy but Fielding's point is that you can mix and match constraints imposed by existing styles to derive new styles. REST gets derived this way and could instead have been called—but for obvious reasons was not—Uniform-Layered-Code-on-Demand-Client-Cache-Stateless-Server (ULCODC$SS). Fielding establishes this taxonomy to emphasize that different constraints are appropriate for different applications and that this last group of constraints were the ones he felt worked best for HTTP.
This is the deep, deep irony of REST's ubiquity today. REST gets blindly used for all sorts of networked applications now, but Fielding originally offered REST as an illustration of how to derive a software architecture tailored to an individual application's particular needs.
I struggle to understand how this happened, because Fielding is so explicit about the pitfalls of not letting form follow function. He warns, almost at the very beginning of the dissertation, that "design-by-buzzword is a common occurrence" brought on by a failure to properly appreciate software architecture.[4] He picks up this theme again several pages later:
Some architectural styles are often portrayed as "silver bullet" solutions for all forms of software. However, a good designer should select a style that matches the needs of a particular problem being solved.[5]
REST itself is an especially poor "silver bullet" solution, because, as Fielding later points out, it incorporates trade-offs that may not be appropriate unless you are building a distributed hypermedia application:
REST is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.[6]
Fielding came up with REST because the web posed a thorny problem of "anarchic scalability," by which Fielding means the need to connect documents in a performant way across organizational and national boundaries. The constraints that REST imposes were carefully chosen to solve this anarchic scalability problem. Web service APIs that are public-facing have to deal with a similar problem, so one can see why REST is relevant there. Yet today it would not be at all surprising to find that an engineering team has built a backend using REST even though the backend only talks to clients that the engineering team has full control over. We have all become the architect in this Monty Python sketch, who designs an apartment building in the style of a slaughterhouse because slaughterhouses are the only thing he has experience building. (Fielding uses a line from this sketch as an epigraph for his dissertation: "Excuse me… did you say 'knives'?")
So, given that Fielding's dissertation was all about avoiding silver bullet software architectures, how did REST become a de facto standard for web services of every kind?
My theory is that, in the mid-2000s, the people who were sick of SOAP and wanted to do something else needed their own four-letter acronym.
I'm only half-joking here. SOAP, or the Simple Object Access Protocol, is a verbose and complicated protocol that you cannot use without first understanding a bunch of interrelated XML specifications. Early web services offered APIs based on SOAP, but, as more and more APIs started being offered in the mid-2000s, software developers burned by SOAP's complexity migrated away en masse.
Among this crowd, SOAP inspired contempt. Ruby-on-Rails dropped SOAP support in 2007, leading to this emblematic comment from Rails creator David Heinemeier Hansson: "We feel that SOAP is overly complicated. It's been taken over by the enterprise people, and when that happens, usually nothing good comes of it."[7] The "enterprise people" wanted everything to be formally specified, but the get-shit-done crowd saw that as a waste of time.
If the get-shit-done crowd wasn't going to use SOAP, they still needed some standard way of doing things. Since everyone was using HTTP, and since everyone would keep using HTTP at least as a transport layer because of all the proxying and caching support, the simplest possible thing to do was just rely on HTTP's existing semantics. So that's what they did. They could have called their approach Fuck It, Overload HTTP (FIOH), and that would have been an accurate name, as anyone who has ever tried to decide what HTTP status code to return for a business logic error can attest. But that would have seemed recklessly blasé next to all the formal specification work that went into SOAP.
Luckily, there was this dissertation out there, written by a co-author of the HTTP/1.1 specification, that had something vaguely to do with extending HTTP and could offer FIOH a veneer of academic respectability. So REST was appropriated to give cover for what was really just FIOH.
I'm not saying that this is exactly how things happened, or that there was an actual conspiracy among irreverent startup types to misappropriate REST, but this story helps me understand how REST became a model for web service APIs when Fielding's dissertation isn't about web service APIs at all. Adopting REST's constraints makes some sense, especially for public-facing APIs that do cross organizational boundaries and thus benefit from REST's "uniform interface." That link must have been the kernel of why REST first got mentioned in connection with building APIs on the web. But imagining a separate approach called "FIOH," that borrowed the "REST" name partly just for marketing reasons, helps me account for the many disparities between what today we know as RESTful APIs and the REST architectural style that Fielding originally described.
REST purists often complain, for example, that so-called REST APIs aren't actually REST APIs because they do not use Hypermedia as The Engine of Application State (HATEOAS). Fielding himself has made this criticism. According to him, a real REST API is supposed to allow you to navigate all its endpoints from a base endpoint by following links. If you think that people are actually out there trying to build REST APIs, then this is a glaring omission—HATEOAS really is fundamental to Fielding's original conception of REST, especially considering that the "state transfer" in "Representational State Transfer" refers to navigating a state machine using hyperlinks between resources (and not, as many people seem to believe, to transferring resource state over the wire).[8] But if you imagine that everyone is just building FIOH APIs and advertising them, with a nudge and a wink, as REST APIs, or slightly more honestly as "RESTful" APIs, then of course HATEOAS is unimportant.
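The difference is easier to see with a toy example. Everything below is invented (the paths, the "links" field, the "orders" relation); real hypermedia APIs use various link formats, but the navigation pattern is the same: the client hard-codes only the base endpoint and discovers the rest by following links.

```python
# A toy HATEOAS-style API. Each response advertises links, so the
# client navigates from the base resource instead of hard-coding URL
# templates. All paths and field names here are hypothetical.

fake_responses = {
    "/": {
        "links": {"orders": "/orders"},
    },
    "/orders": {
        "items": ["/orders/1"],
        "links": {"self": "/orders"},
    },
    "/orders/1": {
        "status": "shipped",
        "links": {"self": "/orders/1"},
    },
}

def follow(path: str, rel: str) -> str:
    """Navigate to a related resource via its advertised link."""
    return fake_responses[path]["links"][rel]

# The client knows only "/" up front; every other URL is discovered.
orders_url = follow("/", "orders")
first_order = fake_responses[orders_url]["items"][0]
assert fake_responses[first_order]["status"] == "shipped"
```

In Fielding's sense, the "state transfer" is exactly this walk through the link graph; most APIs marketed as REST skip it entirely and publish URL templates in documentation instead.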
Similarly, you might be surprised to know that there is nothing in Fielding's dissertation about which HTTP verb should map to which CRUD action, even though software developers like to argue endlessly about whether using PUT or PATCH to update a resource is more RESTful. Having a standard mapping of HTTP verbs to CRUD actions is a useful thing, but this standard mapping is part of FIOH and not part of REST.
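For reference, the de facto convention looks roughly like this. To be clear, this table is community practice (the "FIOH" part), not anything specified in the dissertation, and the example paths are made up:

```python
# The conventional CRUD-to-verb mapping of modern "REST" APIs.
# Nothing in Fielding's dissertation mandates this table.
CRUD_TO_VERB = {
    "create": "POST",    # POST /articles
    "read": "GET",       # GET /articles/42
    "replace": "PUT",    # PUT /articles/42    (full replacement)
    "update": "PATCH",   # PATCH /articles/42  (partial modification)
    "delete": "DELETE",  # DELETE /articles/42
}

# The endless PUT-vs-PATCH argument mostly reduces to whether an
# update replaces the whole resource or modifies part of it.
assert CRUD_TO_VERB["replace"] == "PUT"
assert CRUD_TO_VERB["update"] == "PATCH"
```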
This is why, rather than saying that nobody understands REST, we should just think of the term "REST" as having been misappropriated. The modern notion of a REST API has historical links to Fielding's REST architecture, but really the two things are separate. The historical link is good to keep in mind as a guide for when to build a RESTful API. Does your API cross organizational and national boundaries the same way that HTTP needs to? Then building a RESTful API with a predictable, uniform interface might be the right approach. If not, it's good to remember that Fielding favored having form follow function. Maybe something like GraphQL or even just JSON-RPC would be a better fit for what you are trying to accomplish.
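For comparison, a JSON-RPC 2.0 exchange sidesteps the resource/verb question entirely: every call is just a method name plus parameters, typically POSTed to a single endpoint. The "getUser" method below is hypothetical; the envelope fields ("jsonrpc", "method", "params", "id") come from the JSON-RPC 2.0 spec.

```python
import json

# A minimal JSON-RPC 2.0 request/response pair. The method name is
# invented for illustration. Note there is no mapping onto HTTP verbs
# or URLs: the whole call is described in the message body.
request = {
    "jsonrpc": "2.0",
    "method": "getUser",        # hypothetical method name
    "params": {"id": 42},
    "id": 1,
}

response = {
    "jsonrpc": "2.0",
    "result": {"id": 42, "name": "Ada"},
    "id": 1,                    # echoes the request id
}

wire = json.dumps(request)
assert json.loads(wire)["method"] == "getUser"
assert response["id"] == request["id"]
```

For an internal backend talking only to clients you control, this kind of plain procedure call is often a more honest fit than forcing every operation into a resource noun and an HTTP verb.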
If you enjoyed this post, more like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.
1. Roy Fielding. "Architectural Styles and the Design of Network-based Software Architectures," 128. 2000. University of California, Irvine, PhD Dissertation, accessed June 28, 2020, https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm.
2. Fielding, 130.
3. Fielding distinguishes between software architectures and software architecture "styles." REST is an architectural style that has an instantiation in the architecture of HTTP.
4. Fielding, 2.
5. Fielding, 15.
6. Fielding, 82.
7. Paul Krill. "Ruby on Rails 2.0 released for Web Apps," InfoWorld. Dec 7, 2007, accessed June 28, 2020, https://www.infoworld.com/article/2648925/ruby-on-rails-2-0-released-for-web-apps.html.
8. Fielding, 109.
The pubs will reopen in a few days. Every day this week we will post a 1970s beer mat from the Scarfolk council archives. Collect them all!
When neither druid nor doctor could reverse the process, the victim became a town mascot, offering rides to children. Records show, however, that he was also secretly employed by the state to violently intimidate seditious citizens and prying outsiders. He was known among council staff as 'The Bouncer'.
The as-yet unsolved Steamroller Murders of Spring 1979, when dozens of people were discovered crushed flat with every bone in their bodies broken, were almost certainly a result of The Bouncer's handiwork.


No one is entirely sure what the purpose of this public information poster was. All we know is that when a council worker accidentally posted it on billboards around Scarfolk, the poster below was quickly pasted over it.

Records show that the errant, anonymous worker was soon sold to another council where his job was either to feed the council pets or be fed to the council pets. Documents don't clarify which.

Contents
• 2019: An Introduction - Donna Scott
• The Anxiety Gene - Rhiannon Grist
• The Land of Grunts and Squeaks - Chris Beckett
• For Your Own Good - Ian Whates
• Neom - Lavie Tidhar
• Once You Start - Mike Morgan
• For the Wicked, Only Weeds Will Grow - G. V. Anderson
• Fat Man in the Bardo - Ken MacLeod
• Cyberstar - Val Nolan
• The Little People - Una McCormack
• The Loimaa Protocol - Robert Bagnall
• The Adaptation Point - Kate Macdonald
• The Final Ascent - Ian Creasey
• A Lady of Ganymede, a Sparrow of Io - Dafydd McKimm
• Snapshots - Leo X. Robertson
• Witch of the Weave - Henry Szabranski
• Parasite Art - David Tallerman
• Galena - Liam Hogan
• Ab Initio - Susan Boulton
• Ghosts - Emma Levin
• Concerning the Deprivation of Sleep - Tim Major
• Every Little Star - Fiona Moore
• The Minus-Four Sequence - Andrew Wallace
My short story 'Fat Man in the Bardo', originally published in Shoreline of Infinity 14, is among them, and I'm well chuffed to see it here.
(TOC layout copied and pasted from the redoubtable Lavie Tidhar, who as you can see also has a story in it.)
At the end of April, the retail consultant Mary Portas appeared on the BBC’s World at One programme to discuss how physical shopping could continue to function during the coronavirus crisis.
Portas has a bit of form for, shall we say, car-centric ‘solutions’ to high street problems, proposing the quack remedy of free parking as a response to town centre decline, and generally arguing for unfettered access by motor traffic to shopping streets, while simultaneously paying scant attention to benign modes of transport like walking and cycling. So it was perhaps no great surprise to hear her complaining about having to pay car parking charges in London boroughs during the coronavirus pandemic, while singing the praises of department stores that have converted themselves into drive-throughs, a kind of transformation that these hidebound councils are apparently not enlightened enough to adopt.
Here's 'High Streets expert' Mary Portas advocating "luxury drive throughs" (yes, really) for retail, while moaning about current on-street parking charges. There really is going to be a battle for how our streets feel and look in the coming weeks and months pic.twitter.com/IEaugNqqrf
— Mark Treasure (@AsEasyAsRiding) April 28, 2020
I was reminded of this episode by this excellent cartoon from Dave Walker, which manages to capture the dystopian reality of the Portas worldview in the left panel.
As he so often does, @davewalker has managed to get a very important message, clearly, into a cartoon. Decision Time. pic.twitter.com/jcXda8gjvX
— John Dales
The idea of a society of entirely voluntary arrangements has its charms, but we don't live in one and are not likely to for quite some time. Until that happy day, public services should be funded out of taxation, rather than having to scrounge off the generosity of the public. In emergencies, however, we should pitch in. That's how I square my conscience with making donations, anyway. And if the inadequate supply of PPE to healthcare workers isn't an emergency, I don't know what is. So I was happy to contribute a story to an anthology of SF, fantasy and horror conceived and edited by Ian Whates at NewCon Press, and compiled and published with breathtaking speed. At a quarter of a million words from some of the leading names in the field, a paperback version would be an epoch-making brick that cost a significant chunk of cash. Electronic and weightless, Stories of Hope and Wonder is a steal at £5.99 / $7.99. Every penny of the proceeds goes straight to providing PPE and other support to UK healthcare workers. A significant amount, I understand, has already been raised and donated. More is needed.
They can't wait. Buy it now.
The core of this book describes working conditions in Bakkavor's food processing factories in West London, then moves on to describe how a Tesco distribution centre operates. The opening 100-plus pages are used to set the scene; then come the central 180 pages; finally, after a curious detour into 3D printer manufacture - and leaving aside an appendix - the last 50 pages deal with the question of revolutionary organisation. Cut into the descriptions of contemporary labour and class exploitation is much useful analysis and historical material:
The food and drink industry is the UK's largest manufacturing sector, accounting for 17% of the total UK manufacturing turnover, contributing £28.2bn to the economy annually and employing 400,000 people. And while a lot of fruit and veg is imported, the shelf life of freshly prepared products (FPP) means that outsourcing this work overseas is not possible. All the FPP found in the chilled section of our supermarkets comes from UK factories. Page 136.
People in Britain buy around 3.5 million ready-meals a day, which easily makes it the leading ready-meals market in Europe. Working hours are some of the longest in Europe, which perhaps explains the demand. Page 139.
Bakkavor is one of the biggest UK food companies you've never heard of. You've probably got a Bakkavor food item in your fridge, but you wouldn't know it because their name won't be on the packaging. They employ around 17,000 people across various sites in the UK and source 5,000 products from around the world to supply the largest supermarkets with their own-brand products - from salads, to desserts, to ready-meals and pizzas. Pages 147/148.
Bakkavor has an ageing workforce, the majority in the 55-64 age bracket. The next biggest age group was workers aged between 45-54, fewer again in the 35-44 age range. I think this was a huge factor in the docility of the workforce in general, even when the union was ramping up its activity. There was an aversion to risk, a palpable fear of going on strike, and a resignation that only comes with living a hard life with few victories. That isn't to say there weren't some older workers who were up for the fight. Page 155.
A toxic culture of disrespect pervaded the factories… All the stress and bad vibes understandably had a negative impact on people's mental and physical health. One guy dropped down dead in the smoking area. Another guy, a night shift hygiene worker, died in his late forties. A mild-mannered Polish guy from the maintenance department had a psychotic episode and climbed onto the roof, sobbing in front of his workmates. A young office worker who everybody ignored even killed himself. Others had strokes and panic attacks and were taken away by the ambulance, which came with depressing regularity. It wasn't just that they were old or smoked, although of course those were factors. I think it was also the type of work and toxic culture that drove people to their limits. Page 178.
The poor working conditions at Bakkavor, bad pay and struggles to improve it - alongside the unhygienic methods of food production - are described in detail. The switches from more objective analysis to an utterly subjective position and speculative assertion are sudden and frequent. Some might see this as a weakness but it is actually the book's strength. It's a rhetorical device designed to give those who haven't done these jobs a feeling of insight into them and a sense of empathy with those depicted in the book. Likewise, if you have been employed in the industries described you might be drawn to a conscious embrace of the book's wider analytical perspective in part due to a sense of identification with the text's more subjective turns. Even those who have not worked in these industries - or on some other factory floor - will recognise the social relations depicted from shops, offices and other places of employment.
In short Class Power On Zero Hours is worth reading for its central sections about food production and distribution. The opening and closing parts of the book may resonate with some but were less than thrilling to me. I found the initial section about west London especially tedious and almost gave up when I read the following sentence on the first full page:
Nobody on the London left had even heard of Greenford, not surprising due to its status as a cultural desert, in zone four on the Central line. Page 7.
I don't know - and don't care - if I'd count as part of what Angry Workers configure as the London left, but I'd not only heard of Greenford, until lockdown I was going through it once a month on my way to an extended training session that the martial arts club I belong to holds in South Ruislip. Likewise, I have two friends - one born in the same south-west London hospital as me - who work for Ealing council (pest control and a desk job); for those who don't know, Greenford is part of the borough of Ealing. While I passed through rather than went to Greenford and Park Royal growing up, I spent plenty of time back then in Hounslow, which isn't so far away.
Ultimately the claim that 'nobody' was familiar with Greenford reveals Angry Workers' contact with the working class across much of London when its members first arrived here to have been rather limited. Other things they say point to the same conclusion. On the basis of what the collective writes, it would seem that many of those they hung out with in London before moving to the city's west were students who'd come here to take university courses and who saw themselves as on the left but were clueless about the place they'd relocated to. The text makes it clear Angry Workers went to great efforts to connect with the working class in west London, but leaves the impression they are still disconnected from it in other parts of the city.
The assertion that Greenford has cultural desert status appears obnoxious, racist and anti-working class: clearly not positions Angry Workers would want to be associated with even if what's quoted above might be (mis)read as linking them to views of this type. Bourgeois distaste for proletarian culture - sometimes expressed with the absurd assertion that the working class don't have a culture and exist in a 'cultural desert' - can be found among parts of what Angry Workers seem to be describing as the London 'left'. What 'the left' is and whether 'liberal' elements who want to transform everyone into a bourgeois subject are part of it might be seen by some as open to debate, although not by me. In odd places Class Power On Zero Hours lacks clarity in its verbal formulations but on the basis of the entire text, a generous guess would be it is the views of reactionaries who wish to demean working class immigrant communities that are being invoked in the statement about Greenford's cultural desert status rather than the Angry Workers collective itself believing this to be the case. That said, anyone who was born in the west or south-west of London or who has spent much time there can safely skip the early parts of this book. It is uneven but there is more than enough in its main section to make it worthwhile reading if you're consciously engaged in class struggle: or even if you're not, yet!
Finally, I really liked the solid pink inside covers of the book, so much so that I'm almost tempted to overlook the fact that this publication really cries out for an index. I'm unlikely to read the whole book twice but it would have been helpful to be able to find the parts I'm going to want to access again easily with an index.
Everyone, I hope you and yours are safe and well during this unprecedented pandemic.
As I write this, various governments are rushing to implement — or have already implemented — a wide range of different smartphone apps purporting to be for public health COVID-19 “contact tracing” purposes.
The landscape of these is changing literally hour by hour, but I want to emphasize MOST STRONGLY that not all of these apps are created equal, and that I urge you not to install many of them unless you are required to by law — which can indeed be the case in countries such as China and Poland, just to name two examples.
Without getting into deep technical details here, there are basically two kinds of these contact tracing apps. The first is apps that send your location or other contact-related data to centralized servers (whether the data being sent is claimed to be “anonymous” or not). Regardless of promised data security and professed limitations on government access to and use of such data, I do not recommend voluntarily choosing to install and/or use these apps under any circumstances.
The other category of contact tracing apps uses local phone storage and never sends your data to centralized servers. This is by far the safer category; it includes the recently announced Apple-Google Bluetooth contact tracing API, which is being adopted in some countries (including now in Germany, which just announced that, due to privacy concerns, it has changed course from its original plan of using centralized servers).
In general, installing and using local storage contact tracing apps presents a vastly less problematic and risky situation compared with centralized server apps.
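To make the distinction concrete, here is a highly simplified sketch of the decentralized model. This is not the actual Apple-Google protocol (which uses cryptographically derived rotating identifiers and other safeguards); the class and token scheme below are purely illustrative of the key idea: all matching happens on the device.

```python
import secrets

# Toy sketch of decentralized contact tracing. Each phone broadcasts
# short-lived random tokens and records the tokens it hears, all in
# local storage; no location or identity ever leaves the device.
# A simplification for illustration, not the real Apple-Google design.

class Phone:
    def __init__(self):
        self.my_tokens = []       # tokens this phone has broadcast
        self.heard_tokens = []    # tokens overheard from nearby phones

    def broadcast(self) -> bytes:
        token = secrets.token_bytes(16)  # unlinkable random identifier
        self.my_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard_tokens.append(token)  # stored locally only

    def check_exposure(self, published_tokens) -> bool:
        # Matching happens ON the device, against tokens that infected
        # users chose to publish -- no server learns who matched whom.
        return any(t in self.heard_tokens for t in published_tokens)

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())      # the two phones were near each other

# Later, Alice tests positive and voluntarily publishes her tokens;
# Bob's phone discovers the match locally.
assert bob.check_exposure(alice.my_tokens) is True
```

The contrast with the centralized model is that here the server (if any) only redistributes tokens volunteered by infected users; it never receives the contact graph itself.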
Even if you personally have 100% faith that your own government will “do no wrong” with centralized server contact tracing apps — either now or in the future under different leadership — keep in mind that many other persons in your country may not be as naive as you are, and will likely refuse to install and/or use centralized server contact tracing apps unless forced to do so by authorities.
Very large-scale acceptance and use of any contact tracing apps are necessary for them to be effective for genuine pandemic-related public health purposes. If enough people won’t use them, they are essentially worthless for their purported purposes.
As I have previously noted, various governments around the world are salivating at the prospect of making mass surveillance via smartphones part of the so-called “new normal” — with genuine public health considerations as secondary goals at best.
We must all work together to bring the COVID-19 disaster to an end. But we must not permit this tragic situation to hand carte blanche permissions to governments to create and sustain ongoing privacy nightmares in the process.
Stay well, all.
–Lauren–
The Lord Mayor is not to be confused with the much better known but politically insignificant Mayor of London. Freemasons were cunning like that: they installed themselves in an office few knew anything about but which had loads of power and money as well as a rigged election, while leaving millions of Londoners to democratically put their cross against a dupe with a similar title and lots of visibility but little power. This meant that their World King AKA head of the Court of Aldermen would be left alone to plot in secret. When Boris told me this he also said he hoped people would not recall that he had once been Mayor of London but never Lord Mayor.
Even Johnson's close friend and fellow believer in the bleach cure Donald Trump had been ridiculed for confusing the Mayor of London and the Lord Mayor of London. Those colonials in New York and Washington might only have one mayor but mighty London had two! Boris confided to me he assumed most people were ignorant of the fact that he had been born in New York and wasn't really Lord Mayor material. He hoped no one would suspect he was anything but a true blue Britisher when he called heartily for his favoured brew of Watneys Red Barrel, a beer that had been initially tested on the public at the East Sheen Lawn Tennis Club in south west London. This was close to where John Dee had his home in the sixteenth century, explorer Richard Burton had his tomb and the 1970s punk rock band Subway Sect hailed from.
To tell the truth it wasn't just a desire to get stupid fresh with Catherine MacGuiness - and the multi-billion City's Cash sovereign wealth fund she jointly controlled with the Lord Mayor - that led Boris to turn down ordination into the Guildhall Lodge. He was also concerned that once he was buck naked and dressed in nothing but a blindfold during his initiation, he might be subjected to some indignity he wouldn't have stood still for if he'd been able to see what was going on. Not that there hadn't been lots of perversion when Boris had been a member of the Bullingdon Club at Oxford.
At the Bullingdon they'd hired prostitutes to perform sex acts for them, and then there'd been the time Boris had got so drunk that… Well he'd been so drunk he wasn't sure whether or not he'd taken a fresh corpse borrowed from the local morgue on a date to an expensive restaurant as a dare… Returning to things that put Boris off becoming a fully paid up freemason, there was also the issue of what had happened to both his Ottoman great-grandfather and Roberto Calvi. Although he was not related to the latter, Calvi's death had been much closer to home. The body of God's banker, a top Italian freemason, had been found ritually strung up under Blackfriars Bridge. This was roughly half-way between Britain's Parliament where Boris was Prime Minister and the City of London's Guildhall HQ - where Lodge 3116 met without so much as having to pay to hire a room. Boris had a public image of being a powerful man but he wanted the keys to power that were actually held in the Guildhall. The City of London council got to send a Remembrancer to sit in the House of Commons and tell the government what the City thought of what it was doing. The arrangement wasn't reciprocal.
Returning to Ali Kemal, he'd been assassinated during the Turkish War of Independence. Historians claimed Kemal was bumped off for being a traitor to Mustafa Kemal Ataturk's cause but Johnson knew that he'd been killed so that the Turkish state could lay its hands on his great-grandfather's stock of red mercury. Boris had been told this by the freemasons who had engineered his rise through the ranks of British politics in order to repay a debt their grandparents owed to Kemal. Once Johnson's friend Donald Trump had blown their plan to use a bleach cure to rid the world of Covid 19 - by revealing it prematurely and thus having it ridiculed by the press - it seemed like his best bet for dealing with the virus was to lay his hands on his great-grandfather's stock of red mercury. As every alchemist knows, red mercury is a super rare substance that will cure cancer or boils or almost any other ailment, so why not coronavirus too? The problem was getting hold of the red mercury. When Boris phoned the Turkish Embassy in London to ask for it they told him he was an Islamophobic asshole who'd betrayed his Ottoman heritage. Ingrates!
In the meantime Boris had been passed 7,500 ring donuts that a food bank in his South Ruislip constituency had been unable to distribute to the needy and which would pass their use by date in a matter of hours. Some food processing plant in Greenford was donating what they couldn't sell, and there'd been a huge decline in demand for donuts since a rumour had gone around that eating them while talking on your smartphone caused Covid 19. More than 40 branches of Derek's Donuts had been set ablaze in the past two weeks and hundreds of supermarket workers whose stores sold the snack had been abused and threatened. Of course Boris had got all of his cabinet members to denounce as idiots those who claimed eating donuts caused Covid 19, and he'd brought in some top scientists too whose secret research proved the same thing. None of which stopped the anti-donut activists from promoting the conspiracy theory and denouncing his favourite delicacy as junk food for cops. When push came to shove the country needed donuts for its police force. They - Boris was never explicit when using this generic term whether he was invoking donuts or the police or both - were vital to the UK's infrastructure and without them the virus couldn't be beaten! Likewise, if the boys in blue weren't able to eat donuts in peace then Boris would never Get Brexit Done!
As he was chauffeured to number 10 Downing Street with his 7,500 ring donuts, Boris found himself getting all hot and sweaty. Something had come over him and he'd had one of those flashes of inspiration that were common to men of genius. He'd use the ring donuts to worship the goddess in her triple form - not the conventional maiden, mother and crone, but rather mouth, backside and naughty bits! Boris wasn't too good at maths - he couldn't even work out how many children he had - but he figured the 7,500 donuts would just about cover three out of seven external orifices for every woman he'd ever slept with. If he'd had more ring donuts he might have indulged himself with nasal sex too. Boris was going to work backwards and imagine doing gross and naughty things with all those he'd known Biblically until every last donut had been abused. Johnson had got as far as Jennifer Arcuri when Dominic Cummings burst in and caught Boris bollock naked rubbing a disintegrating ring donut up and down his manhood.
"That's a waste of good donuts that is!" Cummings spat as he took in the remains of several dozen ruined sweet fry cakes on the floor.
"A man with your surname ought to understand what it's like when I've got the horn," Boris whinged defensively, "and besides even a glutton like me couldn't eat 7,500 donuts with a use by date we'll have gone past at midnight!"
"The witching hour!" Cummings boomed. "That reminds me, those scientists you've got advising you on the pandemic have no respect. They may know about the laws of nature but I know about the laws of spirit, and that means I outrank them all!"
"That's well and good, but we must do something about the bad publicity my government is getting over a lack of personal protection equipment for health workers!'
"That's why I told you not to waste the donuts!"
"What have donuts got to do with PPE?" The Prime Minister wanted to know.
"We can turn them into PPE," Dom explained. "Let's string lots of donuts together to make protective gowns. Two rings fastened to each other will make fantastic goggles."
"What about face masks, can we make donut face masks?" Boris asked excitedly.
"Don't be stupid," Dom chided, "anyone using donuts as a face mask would start licking off the sugar coating and then chewing on the cake. Donut face masks wouldn't last five minutes!"
The sugary smell of 7,500 ring donuts had attracted the attention of Larry the Downing Street cat who was mewling like a loon on the other side of the door. Boris let the feline into the room which was a mistake, since Larry was all over the stale fry cakes within seconds. Fortunately it was the ones Boris had used to frottage himself with that most interested the cat. These had traces of dead skin and even blood on them, since the sugar coating had caused a lot of friction when rubbed up and down the Prime Minister's love muscle.
Boris wasn't too hot in the fine motor skills department, in fact he probably needed testing for dyspraxia. Cummings certainly didn't want to risk being exposed as suffering from developmental co-ordination disorder and so his entrepreneurial bent led him to combine three economic sources that were of major significance to the UK - the charity sector's food banks for unwanted donuts, the government and immigrant labour. The Queensmead Sports Centre in South Ruislip's Victoria Road was closed due to the pandemic, and so Cummings decided to deploy its unused gym as a base from which to make prototype versions of the ring donut PPE that would turn around the public's false perception of a poor governmental performance with regard to the current pandemic.
Dom hired a Gujarati woman from Park Royal who'd initially come to the UK to work at the mammoth hummus production part of Bakkavor's Cumberland site in Greenford. She was extremely nifty with a needle and regularly worked as a seamstress because it was difficult to live on the poor wages paid at local food processing plants. Before 24 hours had passed Johnson and Cummings had what they'd dreamed up the previous night, a medical gown and goggles made of ring donuts! Well, the goggles were made of ring donuts. At the suggestion of the seamstress, the gown had been fabricated from jam donuts since having holes the length and breadth of the garment would have been a health risk to NHS heroes.
Although the donut PPE had been Dom's idea, Boris pulled rank and insisted that he be the first to try it out. Given it was made from literally hundreds of jam donuts, the gown proved to be pretty heavy but at least it was voluminous enough for a fatso like Johnson to wear. To keep Dom happy, Boris told him he was going to recommend his adviser for a Queen's Award for Enterprise on the basis of his donut recycling activities in South Ruislip. The prototype PPE turned out to be perfect in every way, except for a slight tendency for the donuts at the bottom of the gown to fall off with a soft plop as Boris spun around in his triumph at having saved the National Health Service. That said, as libertarians he and Dom both knew that ultimately private health was much more efficient than haemorrhaging corporate profits to pay for public services. So once Boris had saved the NHS and after everything got back to normal in about 3 weeks' time, he planned to abolish the NHS.
In his moment of glory for having saved the NHS, Boris decided to burst out of the gym and take a lap of honour on a Queensmead Sports Centre football pitch. After all he'd proved once again that England had won its wars - and the battle against Covid 19 was a war - on the playing fields of Eton! The fact that a fuckwit like Boris could get to be Prime Minister demonstrated that his parents had got real value for money when they'd paid for him to attend Britain's top public school. The fees were reassuringly expensive!
Two unfortunate things happened as Boris jigged across the football pitch. Firstly, his smartphone rang: it was a call from a hefty female former professional kick boxer turned gym instructor with whom the PM was enjoying an intimate relationship.
"I found snot all over my dirty underwear when I was loading it into my washing machine just now. Have you been sniffing it again?"
This baseless accusation caused Johnson to sway and he'd never been a good runner at the best of times. He tripped over his own feet and fell to the ground. Seeing red oozing all around him, Boris thought he was a goner. While the British Prime Minister was able to pull the wool over his own eyes about his life ebbing away before him, he couldn't fool a passing swarm of wasps who knew that what Boris thought was his own blood was in fact strawberry jam. Recovering a slight semblance of sense at the sight of the descending wasps and wanting to save the prototype PPE, even if it was now squashed and in fragments, Johnson tried to shoo the insects away but this only made them angry. Boris quickly discovered that the painful pricks of failure were more or less equivalent to a dozen wasp stings.
In the interests of safety the plans for recycling donuts as PPE were shelved. It was back to the drawing board for Boris and Dom… they still needed a way to demonstrate their political genius by defeating Covid 19.
Submarines also feature in a novella that's coming out sometime in the next month or two: Selkie Summer, from NewCon Press. Some years ago, between book contracts, I started writing a paranormal romance as an exercise, leaving it half-finished when the awaited boat came in. An online publisher showed interest in it as a novella, and was happy to wait until I'd finished The Corporation Wars. When I completed the novella two years ago, it was still front-loaded with an opening more suitable for a longer work, and for that and other reasons it didn't quite make the cut. I tinkered with it some more, and passed it to Ian Whates, who liked it and helpfully suggested further improvements. And after many vicissitudes, including a last-minute page-proof realisation that a Skye summer sunset was about an hour and a half later than I'd originally written, it's good to go! I'm very happy with the book's editing and production, and downright thrilled and delighted with Ben Baldwin's cover. I'm not sure if Selkie Summer meets the criteria for paranormal romance, but it's still about a young woman who falls for a paranormal entity. It's set in a contemporary Scotland much like ours, except that certain paranormal entities definitely exist and this is taken for granted as a fact of natural history. Partly as a consequence, there is no Skye Bridge.
It was due to be launched by me and Ian at Cymera 2020, which has now been cancelled (though it has some online content, and will have more, so keep checking it out). Meanwhile, you can hear me reading from the opening chapter in the online version of Edinburgh's monthly science fiction and fantasy cabaret, Event Horizon.
Another event that has had to move online is the Edinburgh Science Festival. I was honoured to be asked to give a short talk on a non-religious topic at the Festival's traditional St Giles service. I happened to have just read a book that got me thinking about contingency, Corliss Lamont's Freedom of Choice Affirmed, so I freely chose to talk about that. And as the contingencies we all know worked out, it's now online here.
Finally, a plug for a project I'm proud to have contributed to: the just-published Edwin Morgan Twenties, a set of five selections of twenty poems by the late great Makar, with introductions by Jackie Kay, Liz Lochhead, Ali Smith, Michael Rosen and me. You can buy the set for the bargain price of £16 (UK post free) or pick and mix. 'Space and spaces', the one I wrote the introduction to, brings together many of Morgan's science fiction and space poems -- and one or two that make a more metaphorical use of 'space' to brilliant effect. Like the other selections it's a mere £4 (UK post free) and is available here.
The fact that so many are completely incapable of keeping a safe distance from strangers - and this extends well beyond the cops and amateur cops - illustrates how alienated people are. Half the population seem to have no awareness of their own bodies or who's in the street or supermarket with them. Meanwhile, the homeless and mad are becoming ever more desperate and have either given up on begging and can be seen huddled together in encampments on Tottenham Court Road and elsewhere, or else have become much more aggressive in their quest for money to buy food and alcohol. The homeless are supposed to be in shelters but most still seem to be roaming around, presumably preferring the relatively greater freedom of the streets to being locked up under lockdown. There's a shortage of many street drugs but the government recognise booze as an essential and so London's off-licences (liquor stores) are open. Anyone hoping to sleep on the streets probably needs a drink or two in order to nod off, while the rest of the population are also living out the insane nightmare that is late-capitalism and dependence on alcohol is one way of dealing with it.
Covid 19 has brought out the best in many people and the worst in others. There are wonderful community mutual aid groups doing shopping for the vulnerable and delivering presents to children. Meanwhile hysterical media coverage links burning telecommunications masts and infrastructure to ridiculous conspiracy theories about 5G causing Covid 19. At least one of the fires the press was wringing its hands over recently and blaming on anti-5G activists turned out to have been due to faulty equipment and not suspicious. The papers call those who oppose 5G idiots because the equipment that was maliciously targeted is largely 3G and 4G. Since no one has yet been charged with - let alone convicted of - these acts of arson against phone masts, the fact that it isn't 5G equipment that was torched might well imply those involved in the vandalism aren't opposed to 5G and had other motives. The links made in the press between this arson and anti-5G activism are at best speculative.
There are many reasons for setting telecommunications infrastructure alight but even when it's just teens doing it for kicks it doesn't follow that we shouldn't be thrilled by film and photos of the resultant fires. Baudrillard said more than 50 years ago: "Something in all (wo)men profoundly rejoices at seeing a car burn." This rings true because cars are a symbol of possessive individualism and have wrought untold destruction on our planet. Given the negative social impact of smartphones - including but not limited to surveillance and an intensification of work - today nothing is more beautiful than a burning phone mast. Technology isn't neutral, it shapes societies and human relations, and so the health and wellness concerns of anti-5G activists aren't the reason I get a buzz when I see burning telecommunications infrastructure. Nonetheless media hysteria about torched masts and Covid 19 conspiracy theories mean it's now nearly impossible to have a nuanced conversation about the joys and broader political dimensions of such vandalism, why everyone should get rid of their smartphone or what's actually bad about 3G, 4G and 5G.
Under lockdown I like to run around the streets for an hour a day, since it's quite a kick to see most of the shops and all the restaurants in central London closed, and knowing that even if they were open I wouldn't be using most of them. I don't even miss the record and book stores I did sometimes visit before Covid 19. I totally dig jogging through Soho and Covent Garden to visit the places I went as a teen 40 and more years ago and to revel in the fact that the London I knew then has entirely disappeared, just as the hyper-capitalist London of the current millennium is about to disappear. A different and better world is not only possible, it is also very necessary….


I’m doing fine, asymptomatic at the moment and hoping to stay that way until a working vaccine shows up. Hope you’re the same, or better yet, that you had a mild case and are now immune.
Anyway, good wishes aside, I wanted to say something I haven’t dared say for weeks: as bad as this crisis is, I suspect it’s a training wheels exercise for what we’ll have to do to deal with climate change. What I think right now is that if we seriously try to flatten the curve on greenhouse gas emissions, that effort is going to be like what we’re going through now, but longer and more thoroughly disruptive. It pretty much has to be if we’re going to avoid a mass extinction.
However, if you’re an artist looking for inspiration out of the darkness, that isn’t a bad thing.

First, the bad news. No, not the Covid-19-induced economic coma. I’m talking about the Nature paper that came out yesterday (link to an article about it). Basically, it says we’ve got about 10 years until we see tropical ocean ecosystems (coral reefs) start to collapse wholesale. If you know anything about mass extinctions, you know that the disappearance of biogenic reefs in the fossil record is the classic sign of a major/mass extinction event. So that’s maybe 10 years off, although the reefs are mostly in bad shape now. By 2050, the ecosystem disintegration will reach the temperate zone, and the mass extinction will swing into high gear. When I talk about bending the curve, I’m talking about trying to avoid a mass extinction that starts in about 10 years with the loss of a huge amount of fish that feeds hundreds of millions of people. It’s about stopping the death of the coral reefs, and stopping the spread of the disaster.
Now if you use the paradigm I’ve used for the last ten years, you can assume this disaster will definitely happen, and that leads to Hot Earth Dreams land and the high altithermal starting in a decade.
But let’s look at the other side, where people survive however many waves of Covid-19 we go through until we get to a vaccine, and that breaks the inevitability, makes us think that maybe we can actually make a difference in the world. Scientists get respected enough again, and air quality improves so radically that people decide, not as a unified movement but en masse, to get serious about not dying due to climate change. The air quality improvement is quite real, and there’s enough footage of nature bouncing back (pandas mating, coyotes howling in San Francisco’s North Beach) that it’s just possible that people will get the idea that we can actually make a difference.
And so we start to bend the curve on climate pollution, let our fasts from consumerism get longer and longer, listen to the experts more than the reality stars, alternately slack and scramble to survive. Fantasy? You’re living it now.
I’m not going to portray this as easy. People are going hungry right now, whole careers and industries are in limbo. People are dying around the world, and we don’t even know what’s happening in the slums. While we have serious economic disruptors, people taking science seriously, and people showing their best in times of adversity, we also have predatory capitalism at its worst, with the current U.S. administration possibly behaving more like mafiosi than leaders.
And that’s the point, especially if you’re looking for disaster to inspire art, trying to figure out what will get made out of the broken shards of 2019 consumer culture, starting in 2021. Take all the disruption right now, the kleptocracy and vulture capitalism, the suffering of the essential workers keeping us alive, the kindness, heroism, and acts of creativity, the disruption of whole economic sectors surplused and gone, the world gone strangely silent and healing itself. Now ramp that up by orders of magnitude. That’s what flattening the curve on greenhouse gases will look like in the 2020s.
Now it’s not easy, but it’s not dystopian either. It’s a bit different: science is in charge, people are struggling to make a better world, to solve the problems that are killing us now. That’s a classic science fiction theme. But I’m pretty sure that struggle will involve as much suffering as failing and letting the world die.
As the sign in the psychotherapist’s office says, either way it hurts. Death from climate change will be slow, horrible, and painful, with you living to watch everything you care about get destroyed around you. Fighting climate change will be slow, horrible, and painful, as you watch everything you grew up with either fall apart or change into new forms that will survive. If they’re equal, why not choose change? Why not struggle against huge forces to keep the world from being totally destroyed, give the coral reefs a chance to come back, the forests a chance to regrow without migrating to the poles? Why not struggle for a civilization that doesn’t kill itself?
Why not let this inspire you?
A differential analyzer is a mechanical, analog computer that can solve differential equations. Differential analyzers aren't used anymore because even a cheap laptop can solve the same equations much faster—and can do it in the background while you stream the new season of Westworld on HBO. Before the invention of digital computers though, differential analyzers allowed mathematicians to make calculations that would not have been practical otherwise.
It is hard to see today how a computer made out of anything other than digital circuitry printed in silicon could work. A mechanical computer sounds like something out of a steampunk novel. But differential analyzers did work and even proved to be an essential tool in many lines of research. Most famously, differential analyzers were used by the US Army to calculate range tables for their artillery pieces. Even the largest gun is not going to be effective unless you have a range table to help you aim it, so differential analyzers arguably played an important role in helping the Allies win the Second World War.
To understand how differential analyzers could do all this, you will need to know what differential equations are. Forgotten what those are? That's okay, because I had too.
Differential Equations

Differential equations are something you might first encounter in the final few weeks of a college-level Calculus I course. By that point in the semester, your underpaid adjunct professor will have taught you about limits, derivatives, and integrals; if you take those concepts and add an equals sign, you get a differential equation.
Differential equations describe rates of change in terms of some other variable (or perhaps multiple other variables). Whereas a familiar algebraic expression like y = 4x + 3 specifies the relationship between some variable quantity y and some other variable quantity x, a differential equation, which might look like dy/dx = x, or even dy/dx = 2, specifies the relationship between a rate of change and some other variable quantity. Basically, a differential equation is just a description of a rate of change in exact mathematical terms. The first of those last two differential equations is saying, "The variable y changes with respect to x at a rate defined exactly by x," and the second is saying, "No matter what x is, the variable y changes with respect to x at a rate of exactly 2."
Differential equations are useful because in the real world it is often easier to describe how complex systems change from one instant to the next than it is to come up with an equation describing the system at all possible instants. Differential equations are widely used in physics and engineering for that reason. One famous differential equation is the heat equation, which describes how heat diffuses through an object over time. It would be hard to come up with a function that fully describes the distribution of heat throughout an object given only a time t, but reasoning about how heat diffuses from one time to the next is less likely to turn your brain into soup—the hot bits near lots of cold bits will probably get colder, the cold bits near lots of hot bits will probably get hotter, etc. So the heat equation, though it is much more complicated than the examples in the last paragraph, is likewise just a description of rates of change. It describes how the temperature of any one point on the object will change over time given how its temperature differs from the points around it.
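That "hot bits near cold bits get colder" intuition can be sketched in a few lines of code. This is a minimal illustration of my own, not from the article: a simple explicit finite-difference step for a one-dimensional rod, where each interior point's temperature moves toward the average of its neighbours (the names heat_step and alpha are my own choices).

```python
def heat_step(u, alpha=0.1):
    """Advance the temperature profile u (a list of floats) by one time step.

    Each interior point changes at a rate proportional to how much its
    temperature differs from its neighbours -- a discrete heat equation.
    """
    new_u = u[:]
    for i in range(1, len(u) - 1):
        # Hot points surrounded by cold neighbours cool down;
        # cold points surrounded by hot neighbours warm up.
        new_u[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
    return new_u

# A hot spot in the middle of a cold rod gradually spreads out.
u = [0.0, 0.0, 100.0, 0.0, 0.0]
for _ in range(10):
    u = heat_step(u)
```

After a few steps the spike at the centre flattens and the neighbouring points warm up, which is all the heat equation is really saying.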
Let's consider another example that I think will make all of this more concrete. If I am standing in a vacuum and throw a tennis ball straight up, will it come back down before I asphyxiate? This kind of question, posed less dramatically, is the kind of thing I was asked in high school physics class, and all I needed to solve it back then were some basic Newtonian equations of motion. But let's pretend for a minute that I have forgotten those equations and all I can remember is that objects accelerate toward earth at a constant rate of 9.8 m/s², or about 32 ft/s². How can differential equations help me solve this problem?
Well, we can express the one thing I remember about high school physics as a differential equation. The tennis ball, once it leaves my hand, will accelerate toward the earth at a rate of 9.8 m/s². This is the same as saying that the velocity of the ball will change (in the negative direction) over time at a rate of 9.8 m/s². We could even go one step further and say that the rate of change in the height of my ball above the ground (this is just its velocity) will change over time at a rate of negative 9.8 m/s². We can write this down as the following, where h represents height and t represents time:

d²h/dt² = -9.8
This looks slightly different from the differential equations we have seen so far because this is what is known as a second-order differential equation. We are talking about the rate of change of a rate of change, which, as you might remember from your own calculus education, involves second derivatives. That's why parts of the expression on the left look like they are being squared. But this equation is still just expressing the fact that the ball accelerates downward at a constant acceleration of 9.8 m/s².
From here, one option I have is to use the tools of calculus to solve the differential equation. With differential equations, this does not mean finding a single value or set of values that satisfy the relationship but instead finding a function or set of functions that do. Another way to think about this is that the differential equation is telling us that there is some function out there whose second derivative is the constant -9.8; we want to find that function because it will give us the height of the ball at any given time. This differential equation happens to be an easy one to solve. By doing so, we can re-derive the basic equations of motion that I had forgotten and easily calculate how long it will take the ball to come back down.
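As a sketch of that analytic route (the symbols v_0 and h_0 for the ball's initial velocity and height are my own choices), integrating the constant acceleration twice gives:

```latex
\frac{d^2h}{dt^2} = -9.8
\quad\Rightarrow\quad
\frac{dh}{dt} = v_0 - 9.8\,t
\quad\Rightarrow\quad
h(t) = h_0 + v_0\,t - 4.9\,t^2
```

Setting h(t) = h_0 and solving for t gives t = 2v_0/9.8, the time for the ball to return to its starting height, which is exactly the kind of equation of motion I had forgotten.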
But most of the time differential equations are hard to solve. Sometimes they are even impossible to solve. So another option I have, given that I paid more attention in my computer science classes than my calculus classes in college, is to take my differential equation and use it as the basis for a simulation. If I know the starting velocity and the acceleration of my tennis ball, then I can easily write a little for-loop, perhaps in Python, that iterates through my problem second by second and tells me what the velocity will be at any given second after the initial time. Once I've done that, I could tweak my for-loop so that it also uses the calculated velocity to update the height of the ball on each iteration. Now I can run my Python simulation and figure out when the ball will come back down. My simulation won't be perfectly accurate, but I can decrease the size of the time step if I need more accuracy. All I am trying to accomplish anyway is to figure out if the ball will come back down while I am still alive.
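A sketch of that loop might look like the following (the function name and parameters are my own; the time step is smaller than one second for better accuracy). On each iteration the acceleration updates the velocity, and the velocity updates the height:

```python
def time_until_landing(v0, g=9.8, dt=0.001):
    """Approximate the time (in seconds) for a ball thrown straight up
    at v0 m/s to return to its starting height, by stepping through
    the motion dt seconds at a time."""
    height = 0.0
    velocity = v0
    t = 0.0
    while True:
        velocity -= g * dt       # acceleration changes the velocity
        height += velocity * dt  # velocity changes the height
        t += dt
        if height <= 0.0:        # the ball is back where it started
            return t

# A ball thrown up at 10 m/s should land after roughly 2 * 10 / 9.8 seconds.
t_land = time_until_landing(10.0)
```

Shrinking dt makes the answer more accurate at the cost of more iterations, which is exactly the trade-off described above.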
This is the numerical approach to solving a differential equation. It is how differential equations are solved in practice in most fields where they arise. Computers are indispensable here, because the accuracy of the simulation depends on us being able to take millions of small steps through our problem. Doing this by hand would obviously be error-prone and take a long time.
So what if I were not just standing in a vacuum with a tennis ball but were standing in a vacuum with a tennis ball in, say, 1936? I still want to automate my computation, but Claude Shannon won't even complete his master's thesis for another year yet (the one in which he casually implements Boolean algebra using electronic circuits). Without digital computers, I'm afraid, we have to go analog.
The Differential Analyzer

The first differential analyzer was built between 1928 and 1931 at MIT by Vannevar Bush and Harold Hazen. Both men were engineers. The machine was created to tackle practical problems in applied mathematics and physics. It was supposed to address what Bush described, in a 1931 paper about the machine, as the contemporary problem of mathematicians who are "continually being hampered by the complexity rather than the profundity of the equations they employ."
A differential analyzer is a complicated arrangement of rods, gears, and spinning discs that can solve differential equations of up to the sixth order. It is like a digital computer in this way, which is also a complicated arrangement of simple parts that somehow adds up to a machine that can do amazing things. But whereas the circuitry of a digital computer implements Boolean logic that is then used to simulate arbitrary problems, the rods, gears, and spinning discs directly simulate the differential equation problem. This is what makes a differential analyzer an analog computer—it is a direct mechanical analogy for the real problem.
How on earth do gears and spinning discs do calculus? This is actually the easiest part of the machine to explain. The most important components in a differential analyzer are the six mechanical integrators, one for each order in a sixth-order differential equation. A mechanical integrator is a relatively simple device that can integrate a single input function; mechanical integrators go back to the 19th century. We will want to understand how they work, but, as an aside here, Bush's big accomplishment was not inventing the mechanical integrator but rather figuring out a practical way to chain integrators together to solve higher-order differential equations.
A mechanical integrator consists of one large spinning disc and one much smaller spinning wheel. The disc is laid flat parallel to the ground like the turntable of a record player. It is driven by a motor and rotates at a constant speed. The small wheel is suspended above the disc so that it rests on the surface of the disc ever so slightly—with enough pressure that the disc drives the wheel but not enough that the wheel cannot freely slide sideways over the surface of the disc. So as the disc turns, the wheel turns too.
The speed at which the wheel turns will depend on how far from the center of the disc the wheel is positioned. The inner parts of the disc, of course, are rotating more slowly than the outer parts. The wheel stays fixed where it is, but the disc is mounted on a carriage that can be moved back and forth in one direction, which repositions the wheel relative to the center of the disc. Now this is the key to how the integrator works: The position of the disc carriage is driven by the input function to the integrator. The output from the integrator is determined by the rotation of the small wheel. So your input function drives the rate of change of your output function and you have just transformed the derivative of some function into the function itself—which is what we call integration!
If that explanation does nothing for you, seeing a mechanical integrator in action really helps. The principle is surprisingly simple and there is no way to watch the device operate without grasping how it works. So I have created a visualization of a running mechanical integrator that I encourage you to take a look at. The visualization shows the integration of some function into its antiderivative while various things spin and move. It's pretty exciting.
A nice screenshot of my visualization, but you should check out the real thing!
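The integrator's behavior is also easy to mimic in a few lines of code. The following is a sketch of my own, not anything from Bush's paper: the input value plays the role of the wheel's distance from the disc's center, and the wheel's accumulated rotation is the running sum of input times disc turn per step.

```python
def mechanical_integrator(input_values, dt):
    """Accumulate the wheel's rotation: at each step the wheel advances
    in proportion to the input value, which positions it relative to
    the center of the constantly spinning disc."""
    rotation = 0.0
    output = []
    for u in input_values:
        rotation += u * dt  # wheel advance = radial position x disc turn
        output.append(rotation)
    return output

# Feeding in the constant function 1 from t=0 to t=1 should trace out
# the antiderivative t, ending near 1.0.
result = mechanical_integrator([1.0] * 1000, dt=0.001)
```

Feeding in a constant produces a ramp, feeding in a ramp produces a parabola, and so on—which is exactly the behavior the visualization shows.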
So we have a component that can do integration for us, but that alone is not enough to solve a differential equation. To explain the full process to you, I'm going to use an example that Bush offers himself in his 1931 paper, which also happens to be essentially the same example we contemplated in our earlier discussion of differential equations. (This was a happy accident!) Bush introduces the following differential equation to represent the motion of a falling body:
This is the same equation we used to model the motion of our tennis ball, only Bush has used a different symbol for the height and has added another term that accounts for how air resistance will decelerate the ball. This new term describes the effect of air resistance on the ball in the simplest possible way: the air will slow the ball's velocity at a rate that is proportional to its velocity (the k here is some proportionality constant whose value we don't really care about). So as the ball moves faster, the force of air resistance will be stronger, further decelerating the ball.
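In standard notation, writing x for the height, t for time, g for the gravitational acceleration, and k for the air-resistance constant (the symbols here are my own choice, not necessarily Bush's), the equation described above has the form:

```latex
\frac{d^2x}{dt^2} = -g - k\,\frac{dx}{dt}
```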
To configure a differential analyzer to solve this differential equation, we have to start with what Bush calls the "input table." The input table is just a piece of graphing paper mounted on a carriage. If we were trying to solve a more complicated equation, the operator of the machine would first plot our input function on the graphing paper and then, once the machine starts running, trace out the function using a pointer connected to the rest of the machine. In this case, though, our input is just the constant g, so we only have to move the pointer to the right value and then leave it there.
What about the other variables, x and t? The variable x is our output, as it represents the height of the ball. It will be plotted on graphing paper placed on the output table, which is similar to the input table, only the pointer is a pen and is driven by the machine. The variable t should do nothing more than advance at a steady rate. (In our Python simulation of the tennis ball problem as posed earlier, we just incremented t in a loop.) So t comes from the differential analyzer's motor, which kicks off the whole process by rotating the rod connected to it at a constant speed.
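For concreteness, a Python simulation of this kind might look like the following sketch (the names and constants are mine, not the author's): time is incremented in a loop, and each step updates the velocity from the acceleration and the height from the velocity.

```python
def simulate_fall(x0=100.0, v0=0.0, g=9.8, k=0.1, dt=0.001):
    """Drop a ball from height x0 and return the time it takes to land,
    using the model above: acceleration = -g - k * velocity."""
    t, x, v = 0.0, x0, v0
    while x > 0.0:
        a = -g - k * v   # gravity plus velocity-proportional air resistance
        v += a * dt      # integrate acceleration -> velocity
        x += v * dt      # integrate velocity -> height
        t += dt          # time advances at a steady rate
    return t

time_to_ground = simulate_fall()
```

With air resistance switched off (k=0), the ball lands sooner than with it on, as you would expect from the physics.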
Bush has a helpful diagram documenting all of this that I will show you in a second, but first we need to make one more tweak to our differential equation that will make the diagram easier to understand. We can integrate both sides of our equation once, yielding the following:
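Writing x for the height and t for time (again, my own notation for the symbols in Bush's figures), integrating both sides once gives an equation of the form:

```latex
\frac{dx}{dt} = -\int \left( g + k\,\frac{dx}{dt} \right) dt
```

with the constant of integration absorbed into the initial displacement of the integrators.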
The terms in this equation map better to values represented by the rotation of various parts of the machine while it runs. Okay, here's that diagram:
The differential analyzer configured to solve the problem of a falling body in one dimension.
The input table is at the top of the diagram. The output table is at the bottom-right. The output table here is set up to graph both x and dx/dt, i.e. height and velocity. The integrators appear at the bottom-left; since this is a second-order differential equation, we need two. The motor drives the very top rod labeled t. (Interestingly, Bush referred to these horizontal rods as "buses.")
That leaves two components unexplained. The box with the little k in it is a multiplier representing our proportionality constant k. It takes the rotation of the rod labeled dx/dt and scales it up or down using a gear ratio. The box with the addition symbol is an adder. It uses a clever arrangement of gears to add the rotations of two rods together to drive a third rod. We need it since our equation involves the sum of two terms. These extra components available in the differential analyzer ensure that the machine can flexibly simulate equations with all kinds of terms and coefficients.
I find it helpful to reason in ultra-slow motion about the cascade of cause and effect that plays out as soon as the motor starts running. The motor immediately begins to rotate the rod labeled t at a constant speed. Thus, we have our notion of time. This rod does three things, illustrated by the three vertical rods connected to it: it drives the rotation of the discs in both integrators and also advances the carriage of the output table so that the output pen begins to draw.
Now if the integrators were set up so that their wheels are centered, then the rotation of rod t would cause no other rods to rotate. The integrator discs would spin but the wheels, centered as they are, would not be driven. The output chart would just show a flat line. This happens because we have not accounted for the initial conditions of the problem. In our earlier Python simulation, we needed to know the initial velocity of the ball, which we would have represented there as a constant variable or as a parameter of our Python function. Here, we account for the initial velocity and acceleration by displacing the integrator discs by the appropriate amount before the machine begins to run.
Once we've done that, the rotation of rod t propagates through the whole system. Physically, a lot of things start rotating at the same time, but we can think of the rotation as going first to integrator II, which combines it with the acceleration expression calculated from dx/dt and then integrates it to get the result dx/dt. This represents the velocity of the ball. The velocity is in turn used as input to integrator I, whose disc is displaced so that the output wheel rotates at the rate dx/dt. The output from integrator I is our final output x, which gets routed directly to the output table.
One confusing thing I've glossed over is that there is a cycle in the machine: Integrator II takes as an input the rotation of the rod labeled dx/dt, but that rod's rotation is determined in part by the output from integrator II itself. This might make you feel queasy, but there is no physical issue here—everything is rotating at once. If anything, we should not be surprised to see cycles like this, since differential equations often describe rates of change in a function as a function of the function itself. (In this example, the acceleration, which is the rate of change of velocity, depends on the velocity.)
With everything correctly configured, the output we get is a nice graph, charting both the position and velocity of our ball over time. This graph is on paper. To our modern digital sensibilities, that might seem absurd. What can you do with a paper graph? While it's true that the differential analyzer is not so magical that it can write out a neat mathematical expression for the solution to our problem, it's worth remembering that neat solutions to many differential equations are not possible anyway. The paper graph that the machine does write out contains exactly the same information that could be output by our earlier Python simulation of a falling ball: where the ball is at any given time. It can be used to answer any practical question you might have about the problem.
The differential analyzer is a preposterously cool machine. It is complicated, but it fundamentally involves nothing more than rotating rods and gears. You don't have to be an electrical engineer or know how to fabricate a microchip to understand all the physical processes involved. And yet the machine does calculus! It solves differential equations that you never could on your own. The differential analyzer demonstrates that the key material required for the construction of a useful computing machine is not silicon but human ingenuity.
Murdering People

Human ingenuity can serve purposes both good and bad. As I have mentioned, the highest-profile use of differential analyzers historically was to calculate artillery range tables for the US Army. To the extent that the Second World War was the "Good Fight," this was probably for the best. But there is also no getting past the fact that differential analyzers helped to make very large guns better at killing lots of people. And kill lots of people they did—if Wikipedia is to be believed, more soldiers were killed by artillery than by small arms fire during the Second World War.
I will get back to the moralizing in a minute, but just a quick detour here to explain why calculating range tables was hard and how differential analyzers helped, because it's nice to see how differential analyzers were applied to a real problem. A range table tells the artilleryman operating a gun how high to elevate the barrel to reach a certain range. One way to produce a range table might be just to fire that particular kind of gun at different angles of elevation many times and record the results. This was done at proving grounds like the Aberdeen Proving Ground in Maryland. But producing range tables solely through empirical observation like this is expensive and time-consuming. There is also no way to account for other factors like the weather or for different weights of shell without combinatorially increasing the necessary number of firings to something unmanageable. So using a mathematical theory that can fill in a complete range table based on a smaller number of observed firings is a better approach.
I don't want to get too deep into how these mathematical theories work, because the math is complicated and I don't really understand it. But as you might imagine, the physics that governs the motion of an artillery shell in flight is not that different from the physics that governs the motion of a tennis ball thrown upward. The need for accuracy means that the differential equations employed have to depart from the idealized forms we've been using and quickly get gnarly. Even the earliest attempts to formulate a rigorous ballistic theory involve equations that account for, among other factors, the weight, diameter, and shape of the projectile, the prevailing wind, the altitude, the atmospheric density, and the rotation of the earth.[1]
So the equations are complicated, but they are still differential equations that a differential analyzer can solve numerically in the way that we have already seen. Differential analyzers were put to work solving ballistics equations at the Aberdeen Proving Ground in 1935, where they dramatically sped up the process of calculating range tables.[2] Nevertheless, during the Second World War, the demand for range tables grew so quickly that the US Army could not calculate them fast enough to accompany all the weaponry being shipped to Europe. This eventually led the Army to fund the ENIAC project at the University of Pennsylvania, which, depending on your definitions, produced the world's first digital computer. ENIAC could, through rewiring, run any program, but it was constructed primarily to perform range table calculations many times faster than could be done with a differential analyzer.
Given that the range table problem drove much of the early history of computing even apart from the differential analyzer, perhaps it's unfair to single out the differential analyzer for moral hand-wringing. The differential analyzer isn't uniquely compromised by its military applications—the entire field of computing, during the Second World War and well afterward, advanced because of the endless funding being thrown at it by the United States military.
Anyway, I think the more interesting legacy of the differential analyzer is what it teaches us about the nature of computing. I am surprised that the differential analyzer can accomplish as much as it can; my guess is that you are too. It is easy to fall into the trap of thinking of computing as the realm of what can be realized with very fast digital circuits. In truth, computing is a more abstract process than that, and electronic, digital circuits are just what we typically use to get it done. In his paper about the differential analyzer, Vannevar Bush suggests that his invention is just a small contribution to "the far-reaching project of utilizing complex mechanical interrelationships as substitutes for intricate processes of reasoning." That puts it nicely.
If you enjoyed this post, more like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.
Previously on TwoBitHistory…
Do you worry that your children are "BBS-ing"? Do you have a neighbor who talks too much about his "door games"?
In this VICE News special report, we take you into the seedy underworld of bulletin board systems: https://t.co/hBrKGU2rfB

— TwoBitHistory (@TwoBitHistory) February 2, 2020
1. Alan Gluchoff. "Artillerymen and Mathematicians: Forest Ray Moulton and Changes in American Exterior Ballistics, 1885-1934." Historia Mathematica, vol. 38, no. 4, 2011, pp. 506-547. https://www.sciencedirect.com/science/article/pii/S0315086011000279.
2. Karl Kempf. "Electronic Computers within the Ordnance Corps," 1961, accessed April 6, 2020. https://ftp.arl.army.mil/~mike/comphist/61ordnance/index.html.
There’s not too much I can say about Covid-19, but there are some things that need to be said.
First, stay safe. This is going to go on for months, and it’s unlikely we’re going to go back to the way things were before, even after there are effective vaccines, treatments, and herd immunity.
Second, right now, the scale of the response to Covid-19 appears to be greater than what we’d need to adapt civilization to climate change if we started right now. So if anyone posits that we’re all doomed because we can’t mount such an effort, point to what’s happening now and ask them about the basis for their belief.
Now granted, the Covid-19 response is not sustainable, for reasons I go into in Hot Earth Dreams. In that book I compare cities to coral reefs. Both are metastable, composite, living structures that depend on the constant input of energy and circulation of nutrients and organisms (often the same thing, for the carnivorous consumers). Cut off the circulation and especially inflows and outflows to the outside, and both systems will fall apart in short order, simplifying down until they’re sustainable within their new boundaries. For reefs, that means that, if you boxed up a reef in a tank without circulation and food inputs, almost everything dies and what’s left is largely microbial life. For a city under a dome, almost everyone dies, and if anyone does survive, they’re in a village farming the ruins. This is the extreme, but the problem facing pandemic control is to keep the virus from spreading by minimizing human movement and contact, without so strangling necessary movement of food, water, and supplies that people starve. Striking this balance successfully for months to a year or more is going to be really interesting. Doable, but really interesting.
No matter whether polities pull it off while minimizing loss of life or not, the result will shape politics. In some places, strong men will use the emergency to boost their own power, whether they’re effective or not. This is apparently happening in Israel right now, where Netanyahu is clinging to power on a platform of something like “don’t depose and prosecute me while I’m dealing with this heaven-sent new emergency.” Other countries may look at how Singapore, South Korea, and China dealt with it, and more may emulate their system of a surveillance state with local democracy overseen and limited by a higher-level bureaucracy. Indeed, if Biden wins the election and the US Congress doesn’t manage to find its ass with at least one hand, I expect something like this to be installed in the US, again because we’re going to be dealing with the problem of rapid-moving pandemics as long as there are billions of people in the world and cheap air travel, and we’re getting an epic-level demonstration of why a lack of good governance is a very bad thing indeed.
Third: one big, unsolvable problem is that Chinese horseshoe bats (multiple species) reportedly have thousands of endemic coronaviruses circulating and recombining within their populations. It’s something like their version of colds. Almost all of these viruses are not capable of infecting humans, but some are, sort of. This is where SARS came from originally (SARS and Covid-19 are as close to each other as strains of flu). However, the viruses closest to SARS-CoV-2 in horseshoe bats don’t produce Covid-19 exactly. They’re also transient in the bats. The virus appears to have jumped to an intermediate host (perhaps a pangolin, although the pangolin coronavirus found so far isn’t SARS-CoV-2 either) and jumped from there to a human. Or it’s entirely possible that the final SARS-CoV-2 recombined into being inside an early human host. Getting confused? Well, the same process happened with SARS too, with an ephemeral bat virus infecting a civet that got sick with SARS (the intermediate host that made lots of copies of a single virus) and passed it to a human. And Chinese people who live near horseshoe bat roosts apparently show antibodies to various bat coronaviruses. In other words, bat coronaviruses infecting humans appears to be a natural process that’s been happening for…centuries? Millennia? Up until now, it was a rural problem for a few villages, but with global civilization, it can now spill over and rapidly blow up into a pandemic. Getting rid of the bats almost certainly won’t solve the problem, either, because that problem is ultimately too many people moving around the world too fast and in too close contact. Worse, there are other viral sources in the world, and we do desperately need the insect-eating ecosystem services that bats provide, because insects transmit a whole host of illnesses on their own. We’re entirely embedded in the biosphere, and randomly killing off bits of it for greed or fear backfires, often lethally.
Finally, as many know, I’m fond of the metaphor of the Four Horsemen: Epidemic Disease, Famine, Social Unrest, and Death, who ride together. This is NOT to say that Covid-19 is the start of the apocalypse, but it does show how the metaphor works. When there’s an epidemic, or food shortage, or a war, the other two problems show up, and if they’re not dealt with rapidly, a lot of people die. So for example, if a pandemic showed up and went uncontrolled, a few people would hoard resources for the purpose of price gouging and profiteering (the normal causes of famines), as people got sick and died there would be a breakdown of social order, potentially extreme enough to lead to a civil war, and a lot of people would die, some from the disease, but perhaps more from the resulting famine and war. Or you could start with the war. Or you could start with the crop failure. For any of the three causes, the way people actually died as a result (for instance, being shot while looting because they’re desperate for food) might have little apparent connection to the ultimate cause (the pandemic that shut down food shipments and the resulting hoarding). With Covid-19 so far, we’ve seen a very mild version of this play out, with people hoarding toilet paper, hand sanitizer, and food. Don’t take this lightly, because in a more serious situation, the Four Horsemen can bring down civilizations (as apparently with the classic Maya–this has been used as an explanation for how the lowland Maya collapsed, starting with an epic drought and going from there). Rather, look at it as a worked example of another endemic human problem.
The effective responses to these problems as they arise, oddly enough, are very Christian: band together in community (even if you have to maintain physical distance), take care of each other, share what you have, punish and exclude people who would cause problems, or welcome them in if they stop causing problems and make restitution for what they’ve done. That’s what we need to do, going forward.
And stay safe out there, okay?
As vast numbers of people suddenly work from home in reaction to the coronavirus pandemic, doctors switch to heavy use of video office visits, and more critical information than ever is thrust onto the Internet, the risks of major security and privacy disasters that will long outlast the pandemic are rising rapidly.
For example, the U.S. federal government is suspending key aspects of medical privacy laws to permit use of “telemedicine” via commercial services that have never been certified to be in compliance with the strict security and privacy rules associated with HIPAA (Health Insurance Portability and Accountability Act). The rush to provide more remote access to medical professionals is understandable, but we must also understand the risks of data breaches that, once they have occurred, can never be reversed.
Sloppy computer security practices that have long been warned against are now coming home to roost, and the crooks as usual are way ahead of the game.
The range of attack vectors is both broad and deep. Many firms have never prepared for large-scale work at home situations, and employees using their own PCs, laptops, phones, or other devices to access corporate networks can represent a major risk to company and customer data.
Fake web sites purporting to provide coronavirus information and/or related products are popping up in large numbers around the Net, all with nefarious intent: to spread malware, steal your accounts, or rob you in other ways.
Even when VPNs (Virtual Private Networks) are in use, malware on employee personal computers may happily transit VPNs into corporate networks. Commercial VPN services introduce their own risk factors, both due to potential flaws in their implementations and the basic technical limitations inherent in using a third-party service for such purposes. Whenever possible, third-party VPN services are to be avoided by corporate users, and these firms and other organizations using VPNs should deploy “in-house” VPN systems if they truly have the technical expertise to do so safely.
But far better than VPNs are “zero trust” security models such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), which can provide drastically better security without the disadvantages and risks of VPNs.
There are even more basic issues in focus. Most users still refuse to enable 2-factor (aka “2-step”) verification systems (https://www.google.com/landing/2step/) on services that support it, putting them at continuous risk of successful phishing attacks that can result in account hijacking and worse.
I’ve been writing about all of this for many years here in this blog and in other venues. I’m not going to make a list here of my many relevant posts over time — they’re easy enough to find.
The bottom line is that the kind of complacency that has been the hallmark of most firms and most users when it comes to computer security is even less acceptable now than ever before. It’s time to grow up, bite the bullet, and expend the effort — which in some cases isn’t a great deal of work at all! — to secure your systems, your data, and yes, your life and the lives of those that you care about.
Stay well.
–Lauren–
TLDR – the fact that there are people cycling fast on cycling infrastructure in London does not mean that the infrastructure is ‘creating’ or ‘causing’ fast cycling. The people cycling fast are the people who were already cycling in London, brave enough to deal with the (almost entirely hostile) road network. Instead of ‘causing’ fast cycling, cycling infrastructure actually enables a diverse range of users to cycle at whatever speed they wish to pedal. It lowers cycling speeds, rather than raising them.
Last autumn, my partner and I cycled together in London. It was her very first time cycling in the capital. I think it’s more than fair to describe her as a nervous cyclist – while she cycled in her youth, the bike was pretty much discarded once she became a teenager, and she only started again, intermittently, several decades later, on holiday trips we took to the Netherlands.

On one of our Dutch cycling trips
Given her nervousness, the very first bit of road we ended up cycling on in London would have been an absolutely crazy choice five years ago. Upper Thames Street, a four lane canyon of motor traffic running straight through the City of London, is (or at least was) notorious for danger. The cycle courier Sebastian Lukomski was killed here in 2004, crushed under an HGV, a death that Bill Chidley identifies with the start of the politicisation of cycling safety in London. In 2008, Nick Wright was killed a matter of yards from where Lukomski died – again, crushed under the wheels of an HGV. And these horrible collisions kept happening. Again in 2013. And again in 2014. All on the same stretch of road.
But last autumn we cycled along this very same road, in almost complete safety. At rush hour.
Just now my very nervous, very wobbly partner cycled for the very first time in London. On Upper Thames Street pic.twitter.com/uUoap7BgHM
— Mark Treasure (@AsEasyAsRiding) September 6, 2018
To state the obvious, I can guarantee you that there was no way this would have happened, without the subjective and objective safety offered by the protected cycleway, which meant we did not have to cycle down this canyon, mixed in with HGVs, coaches, vans and cars. We had our own space, where we could trundle on our fully-laden Dutch bikes at a sedate 10mph, roughly the same speed as the slow moving motor traffic on the other side of the kerb.
Without that kerb, I don’t think there is any chance that this young child would be cycling along Upper Thames Street either.
Or these children.
Or these children.
The crucial difference the cycleway has made is that people are now free to cycle at their own pace. Just like my partner, they can trundle along fairly slowly, without worrying about HGVs and coaches steaming up behind them. The cycleway has enabled, and will enable, people to cycle at slower speeds – the very people who would never even have considered cycling here, and on similar roads, without it.
It’s more than a little troubling, therefore, to see an emerging narrative that these kinds of cycleways are ‘creating’ a mentality of fast cycling – that their design (and even their name), are somehow fomenting or encouraging a type of cycling that wouldn’t exist if the cycling infrastructure hadn’t been built.
It's entirely possible this thread will set me off on a cycle super-highway rant again.
I mean you can just feel the testosterone in the name. Never mind the appalling design and behaviour they encourage in cyclists who use them. (I am a cyclist).
— Sarah Hayward (@Sarah_Hayward) March 2, 2019
The latest example is a piece from Jill Rutter for Reaction. The piece makes some very sensible arguments, but has had a silly headline added (at a guess, by an editor who has previously demonstrated some antagonism towards cycling infrastructure in London), and unfortunately it creates the overall impression that safe, attractive and convenient cycling infrastructure, rather than enabling nervous people to cycle, is instead fostering a problem of fast and aggressive cycling. This impression was slightly reinforced by some comments the author later made on social media.
Hi Will – not sure if you read my piece in @reactionlife https://t.co/zST6mIcnPC — was discussing with other (female cycling) friends – who said they wd oppose @TfL new infra because of the racer mentality it created..
— Jill Rutter (@jillongovt) May 9, 2019
Unlike those journalists and politicians who are opponents of cycleways and would like to see them removed, and who are therefore making these kinds of arguments about ‘racing culture’ in bad faith, it’s pretty clear to me that Rutter is sincere. She supports cycling infrastructure, wants to see more of it, but is troubled personally by the types of cycling she is seeing at the moment, and worries that it may be putting people off cycling in London.
The problem is that new cycling infrastructure is obviously not ‘creating’ a racing mentality. That mentality is created not by the few miles of safe cycling conditions that have been built in central London, but by the abject reality of the rest of the road network, which does involve mixing with fast motor traffic, and large vehicles, on dreadfully-designed junctions and roads that make no concession whatsoever to the safety of people cycling. The ‘superhighways’ aren’t the reality of cycling in London – they’re only a small respite from it.

We haven’t suddenly transplanted the Netherlands onto London. We’ve built a few miles of good stuff, in pockets here and there, often without even joining it up, and… that’s it. The aggressive, fast people we encounter on ‘superhighways’ are the people who were cycling already, the people who would almost certainly still be cycling even without the subjective safety offered by motor traffic-free conditions.
Plainly, we shouldn’t be musing about why we haven’t got a demographic of relaxed, Dutch-type cycling in London when we’ve barely started changing our roads. We haven’t even scratched the surface. Let’s not start pretending that the infrastructure which allows my friends, family and partner to cycle slowly, and in safety, is somehow encouraging or fostering a type of cycling that is in reality a natural consequence of the rest of the road network.
The gyratory system around Victoria station in Westminster has been a genuinely horrible place to cycle for as long as I can remember. Getting to and from the station, or cycling past it, involves dealing with multiple lanes of one-way motor traffic, zooming off towards Park Lane, or thundering south towards Vauxhall Bridge.
The gyratory makes absolutely no concessions to cycling. If, for instance, you want to get from the station to the safety of Cycle Superhighway 3 – central London’s flagship cycle route, you have to make your way around two sides of a terrifying triangle, holding a position in the right hand lane of traffic heading north onto Grosvenor Place, before taking primary position on the left hand side as you skirt the edge of Buckingham Palace.

Cycling from Victoria to CS3
Cycle Superhighway 5 should have arrived in this area from Vauxhall Bridge, and should – quite sensibly – have connected up with Superhighway 3 in the vicinity of Buckingham Palace. However, it seems to have stalled right on the boundary of (guess who!) Westminster City Council, leaving anyone attempting to get between the two to negotiate a mile or so of unpleasant roads without any mitigation for cycling whatsoever.
Right in the middle of the Victoria gyratory stands the new Nova development. An incidental detail is that one of the buildings here won 2017’s Carbuncle Cup for the UK’s ugliest building, but I doubt that anyone cycling past has any time to assess its aesthetic qualities, given that they are busily trying to stay alive. Like Superhighway 5, this development should have represented an opportunity to make the roads around Victoria a bit less lethal for anyone attempting to cycle here. There’s even a detailed 60-page Transport for London strategy document dating from 2014, the Victoria Vision Cycling Strategy (link opens a download automatically), which explicitly sets out the key challenges and requirements in the Victoria area, in the context of the then-Mayor’s Vision for Cycling.
However, while there have been some improvements in the area around the Nova development – in particular, widened footways, better public realm, and a surface-level crossing that has replaced a subway – it is unfortunate that, despite this golden opportunity to make some serious changes, cycling has been almost completely ignored as the roads have been rebuilt.
One of the biggest issues is that the gyratory around the Nova development has been retained. The new buildings still sit in the middle of what is effectively a giant multi-lane roundabout. The problem of trying to negotiate these roads without being diverted around hostile one-way systems remains, to say nothing of the total lack of protected space for cycling.

Buckingham Palace Road, 2017. New buildings, new footway, new trees, new road surface – the same three lanes of one-way motor traffic.
Cycling towards the camera here remains impossible. And when the bus lane is occupied, cycling away from the camera is – while possible – an unpleasant and potentially dangerous experience.

Buckingham Palace Road, summer 2018.

4 metre wide bus lanes aren’t so great for cycling when they’re full of buses.
Much the same is true on Victoria Street, lying between Victoria station and the Nova development. Again, we have 2-3 lanes of one-way motor traffic thundering through here, exactly as before.

Victoria Street, looking east. The Nova development is on the left.
And again, this arrangement makes no concession for anyone trying to cycle east (away from the camera).
Worst of all, it introduces a significant collision risk at the junction itself, where I am standing to take the photograph. On the approach to the junction, a wide bus stand narrows down significantly, leaving perhaps a metre of width between the kerb and stationary vehicles as a ‘channel’ through which people can cycle to reach an inviting advanced stop line (ASL). The area in question is indicated with the arrow, below.


Approaching the junction on Victoria Street.
That ASL looks very inviting, but getting there could be very risky indeed. There’s absolutely no guarantee that any large vehicle progressing through the junction will remain a safe distance from the kerb. Three separate examples below, taken within the space of a few minutes.



Anyone cycling up to the lights – forced into a tight merge by the narrowing of the road, and tempted to advance by a cycle lane leading to an ASL – could very easily find themselves squeezed between a lorry, or a bus, and the kerb. If any of these vehicles are turning left, like the National Express coach in the photograph below, the consequences could be lethal.

Someone has already had a very narrow escape here, taken to hospital in a critical condition after going under a left-turning lorry at precisely this location.

From Get West London. The lorry is in nearly exactly the same position as the National Express coach in the previous photograph.
This is dreadful design, and it’s shocking that new road layouts like this are appearing right in the centre of our capital city, despite a blank slate to do so much better.
It may not be apparent from these photographs, but the footway on this corner is now very large indeed – nearly twenty metres wide, at the apex.
This is obviously a very good thing, in its own right. A left-turn slip lane for motor traffic has been removed and replaced with this footway, making the junction far more attractive for anyone walking here. But it seems extraordinary that, simultaneously, so little thought has been given to the safety of people attempting to cycle through here. They are almost literally being thrown under the bus. At a location where the building-to-building width is 30 metres, it is simply unacceptable to squeeze people cycling into a tiny space where they are already ending up under the wheels of HGVs.

Blink and you’ll miss it. The tiny, narrow and dangerous concession to cycling in this enormous space.
How can things be going so wrong with brand-new road layouts? How can we be rebuilding roads with 2-3 lanes of one-way motor traffic, without any apparent thought for cycling?
The distinct, unavoidable impression created by the new roads around Victoria is that cycling was treated as a mere afterthought once the road layouts and widened pavements had been planned. Once the kerb lines have been defined, all that’s left to do is to add a painted bicycle symbol in a box just behind the stop line, and perhaps a tokenistic line at the side of the road, where there isn’t any parking, or a bus lane. Even if that might make a dangerous junction even more dangerous.
That’s just not good enough. These roads could and should have been rebuilt with protected cycleways, allowing people to travel to and from the Vauxhall area and central London in safety, or from west London towards central London. Instead they are still being put in danger, and cycling here will continue to remain the sole preserve of the fit and brave.
UPDATE
Alex Ingram points out that – in addition to the Transport Initiatives/PJA 2014 report for Transport for London on cycling in Victoria – the Victoria BID also produced a report on public realm in the area in 2015. It has this to say –
Cycling in Victoria can feel dangerous and intimidating. High volumes of traffic on the Inner Ring Road and the associated Victoria gyratory have a significant negative impact on cycling through the area. One-way streets in general are a hindrance to the desire lines of cyclists and create longer and more difficult journeys.
… Cycle routes and safety are key considerations when upgrading streets and spaces … In April 2013 a cyclist was fatally injured during the morning rush hour at the junction between Victoria Street and Palace Street. A number of minor injuries have also occurred at junctions along Victoria Street and more serious cycling injuries around the Buckingham Palace Road-Lower Grosvenor Place junction. Here the fast-flowing multiple lanes of one-way traffic include many coaches and large service vehicles which create significant hazards. Major arteries such as Vauxhall Bridge Road and Grosvenor Place are also accident hotspots.
Doubtless many of you will have seen this video of a ‘near miss’ on the A38 in Bromsgrove, in which a child narrowly escapes serious injury, thanks to the quick reactions of a driver – a fireman, Robert Allen.
I wasn't sure whether to post this but if it stops a child from being killed on the road it's worth it! Today a child rode out in front of me, across the dual track, without looking! Thankful I was driving under the speed limit & reacted quickly! #neverchanceit @BromStandard pic.twitter.com/NuDgFqcdDj
— Robert Allen (@HWfireRAllen) August 5, 2018
I wasn’t the only one to notice that the way this incident was framed – both on social media, and in the media more generally – focussed entirely on human actions. On the one hand, the quick thinking, forward planning and skill of the driver, and on the other, the mistakes and foolishness of the children.
Framed in this context, the only way to prevent near misses (or even serious injuries and fatalities from occurring in future) is to ensure that all drivers are as quick-thinking and careful as this one, and also to ensure that children don’t behave impulsively, and don’t make mistakes and misjudgements.
But unfortunately both of those things are actually very difficult, if not impossible, to achieve. Children, especially younger children, have serious problems judging the speeds of approaching vehicles, due to their difficulty in perceiving visual looming (HT AndyR_UK). On top of that they will inevitably be impulsive, fail to concentrate, or become distracted. Equally, drivers won’t be paying attention 100% of the time. They will also get distracted. They are fallible. They will not all be as cautious and as quick to react as Robert Allen. Because they are human beings, not robots.
So the only realistic way of preventing these kinds of incidents from happening in the future is to design the danger out of the crossing. We can’t rely on human beings not to do stupid things, or not to make mistakes, because that’s not who we are. The only rational response is to minimise the chances of collisions occurring in the first place, and to minimise the severity of those collisions if they do happen. The alternative – attempting to get children to behave properly in the context of this type of crossing – is nothing more than applying the flimsiest of sticking plasters to a gaping wound.
If we look at the location, it’s a little bit ambiguous whether the posted speed limit is 60mph or 70mph, because the crossing is at exactly the point where two lanes (60mph limit) become a four lane dual carriageway (70mph limit).
But whether it’s 60mph or 70mph doesn’t really matter – as Ranty Highwayman observes, either way, these are still very high speeds for children to be processing, especially where drivers will be distracted by the process of merging back down to one lane in the oncoming direction, or focused on accelerating up to 70mph as they move into two lanes from one in the facing direction.
On top of that, we have the pedestrian barriers – presumably installed with the intention of stopping people from cycling straight out into the road – acting to steer anyone cycling up to the crossing into a position parallel to the road, where any oncoming motor traffic will be directly behind them. 
Rather than naturally facing that oncoming traffic, children (or anyone else cycling here) will have to look right back over their shoulder to process it. Frankly the entire layout is a recipe for casualties, which the ‘Sign Make It Better’ warning does nothing to fix (not least because it’s only about 50 feet from the actual crossing – not a great deal of help when it comes to alerting drivers to the potential danger).
I’m not sure when this road was built, or when it was thought that this was an appropriate type of crossing for a road of this nature – but it’s far from unique.
Here's another A38 ped crossing just outside Lichfield The gap in the barrier is where people are supposed to cross pic.twitter.com/3l9jtAt2sK
— Andy (@Bluecrossbar) August 7, 2018
There are several similarly lethal crossings of 70mph dual carriageways in West Sussex, usually the result of existing routes or lanes being severed by the construction of new roads and bypasses, with absolutely no consideration given to the safe passage of people walking and cycling across them. I can think of at least three on Horsham’s northern bypass, which was built in the late 1980s. Below is just one of them.
There’s housing behind the trees on the left hand side of this location, and a railway station a couple of minutes’ walk down the lane to the right. It’s not only the danger that is infuriating – it’s the fact that people could be walking and cycling, easily, to and from these locations, but have these horrendous barriers put in their way. The road simply shouldn’t have been built like this – it should have had underpasses integrated into it during construction, to allow people to cross it freely, and in safety.
The Bromsgrove example is perhaps even more pressing, however, as the road is a clearer example of severance – with housing on both sides of the road. If the A38 were a Highways England road, then under the IAN 195/16 standard (which I’ve covered here) a grade-separated crossing would be a mandatory requirement for a 60mph limit. If that’s not possible, then the speed limit should be lowered, the motor traffic lanes should be narrowed significantly, and the crossings should involve clear sightlines, with only one lane crossed at a time. Something like this, which I saw on a distributor road in the city of Zwolle.
There are no signals; this is just a simple priority crossing, with cycles having to give way to motor traffic. However, only one lane has to be crossed at a time, motor traffic speeds are much lower, and the visibility is excellent, for all parties. This really isn’t rocket science.
If the council wish to retain a 60mph limit, or four lanes of motor traffic, then that obviously means that human beings should be separated entirely from the road, to insulate them from the increased danger that attempts to cross such a road at-grade would involve. An underpass is the obvious answer in that traffic context.

That would plainly be an expensive undertaking, but really there’s no other safe way of addressing the severance posed by a multiple lane road with such a high speed limit.
Obviously I don’t know the local situation, but I suspect it may be much more appropriate to ‘downgrade’ the road to an urban distributor, with single lanes in each direction, separated by a median, and with a much lower speed limit. That would allow the ‘Zwolle’ type of crossing to be employed, and safely. But there are many other places where that is impossible, or at least undesirable. The Horsham example is one location where the traffic volumes and road context – an explicit bypass – should really necessitate grade separation.
To a large extent, we’re reaping the harvest of decades of road-building and planning with little or no thought for the safety and convenience of anyone who wasn’t in a motor vehicle – people cycling and walking along these roads, or attempting to cross them. It’s going to be difficult and costly to undo that damage. Perhaps it’s not surprising, therefore, that we all reach for the superficially easy option of attempting to change human behaviour, rather than changing the system, when we’re confronted with incidents like the one in Bromsgrove.
There’s recently been some silly-season noise about making the use of bells compulsory in our newspaper of record, The Times.
Frothing gibberish on Page 3 of The Times today. Remember when this paper took cycling safety seriously? pic.twitter.com/iemsfa2seJ
— Mark Treasure (@AsEasyAsRiding) July 14, 2018
This story seems to have been based entirely on five written questions from the MP Julian Lewis.
Re @thetimes article about bicycle bells, it is based on 5 parliamentary questions posed by Julian Lewis (MP for New Forest, Conservative) to @transportgovuk .
All 5 receive 1 answer from Transport Minister @Jesse_Norman https://t.co/ijCdWJMTIL pic.twitter.com/Qap6brq38F
— always last (@lastnotlost) July 14, 2018
The non-committal response from the Minister has been spun into a story; the Minister in turn dismissed it.
But let’s (charitably) take this seriously, just for a moment. What does ‘mandatory use of bells’ actually mean? Am I supposed to ding every time a pedestrian hoves into view? In a town or a city, my bell would be ringing relentlessly. Let’s also bear in mind that plenty of people will object, often quite aggressively, to the ringing of a bell, interpreting it as akin to the honking of a car horn. A basic starting principle – before any of this nonsense ever gets anywhere near legislation – would have to involve getting some agreement and consensus about what people actually want and expect, when it comes to a form of audible cycling warning (or even whether they want people making a noise at all). If you can’t get the general public to agree, which I would imagine is more than likely, then there’s no point even embarking on legislation in the first place.
In any case, the general issue of bells, warnings and ‘silent rogue cyclists’ is symptomatic of basic design failure. I’ve probably cycled at least 500 miles in the Netherlands over the last five or six years. Not a huge amount, but enough to get a good flavour of the country. In all that distance – in cities, in towns, through villages, across the countryside – I can’t honestly remember ever having to ring my bell to warn someone walking that I was approaching. Not once.
A large part of that is probably down to the fact that people walking in the Netherlands are – understandably – fully aware that they will encounter someone cycling quite frequently. In general, it’s unwise to assume that, just because you can’t hear anything approaching, nothing is approaching – and this is especially true in the Netherlands. Being aware of cycling is just an ordinary part of day-to-day life, because everyone cycles themselves, and because they will also encounter cycling extremely frequently.
However, I suspect my lack of bell use is also due to the fact I rarely ever come into conflict with pedestrians, because of the way cycling is provided for. Unlike in Britain, where walking and cycling are all too frequently bodged together on the same inadequate paths, cycling is treated as a serious mode of transport, with its own space, distinct from walking.

No need for bells here, to warn people you are approaching
I don’t need to ring my bell to tell someone walking I am coming up behind them because we’re not having to share the same (inadequate) space. There are of course many situations in the Netherlands where walking and cycling are not given separate space – a typical example below.

However, these will almost always be situations where the numbers of people cycling, and of people walking in particular, will be relatively low. In practice, these paths function as miniature roads, marked with centre lines, and used by low amounts of low-impact traffic. Pedestrians treat them as such, walking at the sides, and the dynamics of path use are obvious and well-understood. If demand for these paths increases, such that people walking and cycling begin to get in each other’s way, separation of the two modes becomes a necessity. It is all blissfully rational.
Contrast that with Britain, where ‘cycle routes’ will often be nothing more than putting up blue signs to allow cycling on existing – often quite busy – footways.

It isn’t hard to see why people walking will not be expecting cycling in these kinds of environments. It looks like a footway; it feels like a footway. It is a footway. So anyone cycling then has to decide how best to approach someone from behind.
- Do they ring their bell?
- Do they try to make a noise with their bike?
- Do they call out? And if so, what do they say?
- Do they try to glide past without any noise at all?
Bear in mind that there is absolutely no consensus on which of these techniques is preferable to people walking. Some people hate bells, because they think it implies they are being told to get out of the way. Some people don’t like noises, or calls, and apparently prefer the clarity of a bell, and what it signifies. Some other people might be deaf.
As you cycle up behind someone, there is obviously no way of knowing how that particular person will react, and what they will prefer. It is entirely guesswork.
My own technique is usually to approach, slow down a bit, and hope that the person gradually becomes aware of my presence. If they don’t, then I usually say ‘excuse me’. My bell is reserved for occasions when someone is stepping into the road without looking, or similar situations where I can foresee a potential collision occurring.
A short snippet below, on a path I use on a daily basis.
I can see that the two girls are aware that I am approaching, but I slow down in any case, until I am sure. The woman is not aware, so again I slow down, and have to use a verbal ‘excuse me’ to let her know I am there.
This probably isn’t perfect. Maybe there is a better way. But really, I don’t think there even is a ‘perfect’ way of dealing with these kinds of minor conflicts. They are all flawed. You are going to startle someone; you are going to do the wrong thing without even realising it; you are going to annoy someone. It’s unavoidable.
But the solution to this problem is not MOAR BELLS or MANDATORY USE OF MOAR BELL. The basic issue here is crap cycling and walking environments. Every single location where people are being expected to use bells (or some other form of audible warning) will be one where cycling is not expected; where someone cycling is having to share the same space as someone walking; where there is not enough width for the two modes to peacefully co-exist. Bells are not the solution to this problem. Better design is.
The path in my video above is only a couple of metres wide, and has to accommodate cycling and walking in the same space. That’s just a straightforward recipe for conflict. If you think the answer to that conflict is bell legislation then you don’t care about cycling, and frankly you don’t really care about walking either. I don’t want to be cycling at little more than walking pace, having to ring my bell every few metres. I doubt people walking want to be having to deal with that either. I certainly don’t when I am on foot.
Let’s stop dribbling on about bells and instead ensure that our walking and cycling environments work for both modes, with clarity about where people should be and about expected behaviour, and with comfortable space for everyone.

On a sunny September day last year, I headed out from the city centre of Utrecht to take a look at the town of Bilthoven, about five miles away. Despite being a fairly small settlement (Bilthoven itself only has a population of around 20,000 people, although it closely adjoins the larger town of De Bilt) the area around the station has been extensively redeveloped, with the road diverted, and a new underpass built that only allows walking and cycling.
This is the railway station for a town of just over 20,000 people. pic.twitter.com/g1P0lG9vEi
— Mark Treasure (@AsEasyAsRiding) September 19, 2019
I had arranged to meet my partner back in Utrecht city centre at midday (she wasn’t quite as interested as me in pedalling around ten miles to go and look at some cycle paths and new development!), and I was running a bit late, stopping off to take videos of roundabouts and cycle paths on the way.
Cycle path running parallel to main road, on the approach to Bilthoven.
The path is actually built in the woods, some distance from the road. It has its own street lighting. It then crosses a bridge, before skirting a roundabout (with priority). Just brilliant. pic.twitter.com/tlOPg244DU
— Mark Treasure (@AsEasyAsRiding) September 20, 2019
Fortunately, on my route back, I stopped at a red light in De Bilt behind a lady on an e-bike, with distinctive bright green panniers.
I say ‘fortunately’ because over my years cycling in the Netherlands, I’ve found people on e-bikes are a real bonus when you are cycling along on a human-powered bike. You can tuck into their slipstream and roll along at a fairly steady 15mph, a speed which would take some effort to sustain on a heavy utility bike if you are cycling on your own. The couple in the photograph below, pedalling along effortlessly on their e-bikes, were an absolute godsend on a baking hot day when I was struggling towards Delft along a dead straight and seemingly unending road.

There’s no need to feel guilty about drafting either, because the person on the e-bike is expending very little effort while they act as your personal windbreak.
On my way back to Utrecht, it turned out that I managed to get a very pleasant tow all the way back to the city centre. Here we are, just leaving De Bilt, on the quiet road that runs through the town centre (the main through-road is behind the hedge to the left).

Then along the service road that runs parallel to the main road. (Note the induction loops built into the asphalt, that almost always ensure you have a green signal as you approach these minor junctions).
And then under the famous ‘Bear pit’ junction on the outskirts of the city centre.
Now on Biltstraat, heading into the centre of Utrecht, on a protected cycleway. The road here is only one-way for private motor traffic, with a two-way bus lane off to the left.

Soon after this, the lady gets away from me a bit because… she jumped a red light – a pretty minor violation given the traffic context, but I didn’t want to risk it myself.
She didn’t gain much advantage from her light jump, however, because soon enough I caught her up again, right into the city centre, along the busiest cycle path in the Netherlands (if not in the world) –

Then through the large underpass that takes you straight under the sixteen platforms of Utrecht railway station –
Before our journeys separate, and she peels off to join the cycle parking at the station itself – presumably to catch a train.

We travelled 6km together, nearly four miles, and according to my GPS track we did it in a little over 14 minutes, right from the eastern side of De Bilt, to the railway station in the city centre of Utrecht.

Perhaps unsurprisingly, this works out at an average speed of a fraction over fifteen miles an hour, or 25kph, the legal limit for e-bikes. We only had to stop twice – at the red light I stopped for, and she didn’t, and then to cross Vredenburg.
Given the point at which we met, she had almost certainly come from even further away, either from Bilthoven, where I started, or from Zeist. These are both around five miles away, but at e-bike speeds, only twenty minutes or so from the centre of Utrecht. Indeed, 5 miles can be covered in exactly 20 minutes on an e-bike, which means that a huge area is within a negligible cycling time of both the city centre and the train station – pretty much the whole Utrecht agglomeration (well over half a million people), as well as other outlying towns and villages.

A five-mile-radius circle, centred on Utrecht train station. At e-bike speeds, it would take just twenty minutes to cover this distance.
That’s pretty remarkable when you think about it – that all these people are (potentially) less than twenty minutes away from the centre of the city, and able to get there in comfort and safety, with minimal inconvenience, just by pedalling, and without any need for a car. When you consider that these trips can then be made in combination with the Netherlands’ excellent rail network, vast swathes of the country are quickly within reach to people even in apparently ‘remote’ areas. Hypothetically, the lady I was following could have come from a remote village miles away from Utrecht, caught a train to Rotterdam, and travelled door-to-door in less than an hour.
Naturally, these benefits accrue to people on ‘ordinary’ bikes too – people like me – although 5 miles is more likely to take 30 minutes or more to cover, under our own steam. That’s an entirely manageable distance, but e-bikes will obviously reduce the effort required, and have the potential to seriously start eating into those longer car trips that are not quite so appealing by bike.
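As a quick sanity check, the speed, time and distance figures quoted in this post do hang together. The sketch below simply restates that arithmetic in code – the 14.4 minutes is my assumed reading of ‘a little over 14’, and the 10mph ordinary-bike pace is an assumption implied by the ‘30 minutes or more’ figure:

```python
# Back-of-envelope check of the trip figures quoted in this post.

KM_PER_MILE = 1.609

def avg_speed_mph(distance_miles: float, minutes: float) -> float:
    """Average speed in mph for a trip covered in the given number of minutes."""
    return distance_miles / (minutes / 60)

# The tow from De Bilt to Utrecht station: 6 km in a little over 14 minutes
# (14.4 minutes assumed here).
speed_mph = avg_speed_mph(6 / KM_PER_MILE, 14.4)
speed_kph = speed_mph * KM_PER_MILE
# speed_mph comes out a fraction over fifteen miles an hour,
# and speed_kph very close to 25 km/h, the e-bike assist limit.

# Five miles at that ~15mph e-bike pace, versus an assumed ~10mph human pace.
ebike_minutes = 5 / 15 * 60       # 20 minutes
ordinary_minutes = 5 / 10 * 60    # 30 minutes
```

None of this is precise, of course – GPS distances and elapsed times are approximate – but it shows the ‘twenty minutes for five miles’ catchment claim follows directly from the observed average speed.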
In the UK we can only dream of having the kind of comprehensive, high-quality cycling networks that surround places like Utrecht, and that enable the types of journeys described here. Where I live, we have absolutely no meaningful cycle network to speak of, and a negligible rail network. Even our former railway lines – which could serve as excellent, flat connections between towns and villages, either by bike or e-bike – have been allowed to disintegrate into bumpy, muddy bogs that are unattractive and unpleasant to use.

A section of the former Horsham-Shoreham railway line, north of Partridge Green
Meanwhile our roads are increasingly clogged with dangerous, polluting and environmentally damaging motor vehicles, in the vast majority of cases making short trips that could be converted to other modes.
There is no reason why we can’t match the Netherlands. E-bikes erode the argument that it is too hilly, or too difficult, to cycle, and in combination with high quality cycle networks we could have people effortlessly travelling the kinds of distances described in this post.
As with many British towns in the wake of the 1963 Traffic in Towns report, Horsham responded to the coming age of the motor car with a mixture of enlightenment and destructiveness. In doing so, it largely reflected the nature of the Report itself, which presciently diagnosed the enormous problems mass motoring would present, but offered damaging remedies that essentially accommodated ever-expanding demand for driving right in the heart of our towns, alongside a more benign banishment of it from limited areas within them.
In Horsham, that destructiveness involved the construction, in several stages, of a four-lane inner ring road that now encircles most of the town centre, and the construction of several large multi-storey car parks to accommodate increasing numbers of private cars.

The red line indicates the approximate route of the four lane inner ring road, over the previous street pattern. Original map here.
Although that ring road was (and remains) a blight on the town, the area within it has fared rather better, with a fairly deliberate policy of either complete removal of motor traffic, or minimising its levels. Through-traffic is discouraged by means of a 20mph zone (one of the first in the country) combined with a winding, circuitous route through the town centre, while many other streets have been either fully pedestrianised, or part-pedestrianised.
While these changes within the ring road are largely to be applauded, the enlightened planners and councillors who implemented them sadly neglected to consider cycling in any way, shape or form. One of the biggest issues is that the one-way flow through the centre, while successful at keeping motor traffic on the inner ring road in an east-to-west and north-to-south direction, also completely excludes cycling. I’ve previously written about this specific issue here.
Another longstanding problem for cycling lies to the western edge of the town centre. Here the former main north-south road across the town (shown in green and blue in the overhead view below) has been bypassed to the west by the four lane inner ring road (in red), leaving short sections of road with a pedestrianised area in the middle (highlighted in green), that still allows cycling in a north-south direction, but in a very half-hearted and ambiguous way. In other words, it’s not at all clear that it’s legal to cycle there.

This is actually a fairly important area for cycle journeys, because as well as potentially allowing you to cycle in a north-south direction avoiding the unpleasant, fast and busy four lane inner ring road (which naturally makes no concessions to cycling at all), it should also allow journeys in an east-west direction – particularly, people coming from the north and the west to enter the town centre. All these potential routes are shown on the overhead view below.

The red lines indicate entry and exit points for cycling. To the left is the large inner ring road.
The real difficulty lies at the southern end, where a new bus station was built around twenty years ago. It lies in the middle of the red ring, above. The building itself is attractive, but once again there was absolutely no consideration of cycling when it was planned (are you sensing a pattern here?).
The area where buses arrive and depart is buses-only – so the area ringed in green, below, is a no-go area for cycling.

That means all the movements through this area have to pass through the gap between this green area and the building on the corner, which is at present a pedestrian crossing, connecting the pedestrianised area with the bus station. This is a very awkward fit for cycling.
The video below shows me cycling along the line of the red arrow. This is at a particularly quiet time of day, early in the morning, so it is free of the potential conflict with people walking to and from the bus station.
It’s not even clear to me how legal this is. I take the option of crossing into the bus station and then moving across the solid stop line (the lights will only change for buses, so jumping the lights is unavoidable). The alternative is to cycle onto the pedestrian crossing, but that doesn’t seem particularly appealing either.
Short of rebuilding the bus station and starting again from scratch, to my mind there are no obvious fixes here to formalise cycling through this area. Perhaps a short-term bodge is simply to convert the pedestrian crossing into a toucan that is at least legal to cycle onto, but then you are left with the inelegant solution of cycling off it to join the road where the heads of the red arrows are located. Furthermore, this toucan crossing would not help with cycling in the opposite direction, where people have to cycle (the wrong way!) into the bus station entrance from a signalised road junction, and then somehow ‘merge’ onto a toucan crossing which may well have people walking on it.

To demonstrate, here is another video of me on this desire line, cycling from the east, then heading north, along the line of the upper red arrow. Currently I take the approach of cycling onto the footway before the red light, to avoid conflicts with the pedestrian crossing. Although cycling in the pedestrianised area is legal, it probably isn’t on this bit of footway. But I’m not sure what else to do.
For pure north-south cycling journeys, the most obvious option is some kind of route running down the western edge of the bus station. There is a new-ish hedge that could potentially be sacrificed, and some parking bays that are occasionally used by service vehicles from the bus companies.
Here is a family walking south down the footway along the western edge of the bus station, with the hedge and the parking bay to their left.

This would solve these purely north-south journeys. However, it wouldn’t do anything to address most of the journeys across the area, which will involve some east or west component, and therefore the difficulties shown in my videos.
Indeed, the junctions around the bus station are an almost perfect case study in how people cycling are turned into lawbreakers (or at least flexible rule-benders), because nobody has given any thought to how people would actually cycle through the area.
From the east, the ‘least worst’ option is to cycle on a short bit of footway (which may or may not be legal), and from the north the ‘least worst’ option is either to cycle onto a pedestrian crossing, or to cycle through a red light designed only for buses.
It’s a mess. And without a total redevelopment of the area, I’m not sure how it can be substantially improved. But any thoughts on how it might be done would be welcome! This area is important, as it is right in the town centre, and the question of how to cycle across it in at least a legal manner needs to be solved.


Neal Ascherson
Democratic Left Scotland, n.d. (2018)
This is an odd pamphlet which is well worth getting. Some day it'll be a collector's item. It's well-produced on glossy paper, with a striking cover and, inside, a fine reproduction of the portrait whose gift and sitter the pamphlet celebrates. In these pages three big names meet: the author Neal Ascherson, the subject Tom Nairn, and the painter, Sandy Moffat. It has already been reviewed, briefly and enthusiastically by Davie Laing, and lengthily and discursively by Rory Scothorne. There's no need for me to review it here, inevitable quibbles though I may have - I can only recommend it, as a small piece of history, and a useful summary of an argument that is still influencing that history.
The title, apt as the pun on 'painting' no doubt was for the occasion, does less than justice to the content: a concise intellectual biography of Nairn by the journalist who did a great deal to make his ideas part of common sense. Ascherson saw Scotland in an international context provided by his own wide-ranging life; Nairn's intellectual formation was likewise cosmopolitan; and for both Scotland was key to dismantling the 'archaic' structures of the British state.
The pamphlet can be obtained by sending a cheque for £4 (10% discount for orders of 10 or more) to:
Democratic Left Scotland,
9 MacAulay Street, Dundee DD3 6JT
If the archaic structures of 'cheque' and 'post' are too constraining, you can always enquire of the publisher by telephony and the interwebs:
Telephone 07826 488492
Email stuartfairweather [at] ymail [dot] com
Scott Hames
Edinburgh University Press, 2020
How well I remember Scotland in the 1980s! Scunnered by the failure of even a majority vote to establish a Scottish Assembly, snookered by the Cunningham Amendment, gubbed in the first round of the World Cup, gutted and filleted by Thatcherism, disillusioned by repeatedly voting Labour and getting Tory ... only the writers and artists remained standing, to produce a body of self-confident work that firmly established the nation on the global cultural map. Together with dedicated political and civic activists they in due course lifted its spirits to the heights of gaining its own Parliament. They accomplished a devolution - or independence - of the mind and heart, well in advance of its political achievement.
I remember it like this, of course, because I wasn't there. I was in London, reading all about it in the columns of Neal Ascherson and the volumes of Tom Nairn. Now and again I'd browse a journal or pamphlet from the Scottish literary or political edge. Scotland from afar seemed to have a more democratic, more socialist and more egalitarian spirit than England - particularly the South- East of England - and this consciousness showed through in the culture it had inherited as much as in the culture it now produced. And since I moved back in the early 1990s, the same story has become received wisdom, not least among writers and artists.
According to Scott Hames's new book, the real story is a bit more complicated than that. So much more complicated, indeed, that it's hard to summarise. If he's missed a magazine, a literary feud, a Commission or a Report, it's not for want of looking. The discussion is sometimes dry, the narrative always engaging. The savagery of the spats he disinters from the archival peat-bog is eye-opening. Hames contrasts 'The Dream' of literary nationalism with 'The Grind' of political procedure (characterised in this case more by friction than motion). Two features stand out, all the more because in retrospect they're often overlooked. The first is how radical an aim devolution seemed, and how bright it shone in the literary imagination. The second is how conservative - how conserving - a manoeuvre its implementation was, driven far more by the need of the British state and the Labour Party to 'manage national feeling' than by the SNP, whose votes were read as fever-chart symptom rather than political challenge.
In focusing on the cultural and the political, Hames avowedly and explicitly omits the economic and the social. This is fair enough in its own terms, but it's liable to leave the reader's inner vulgar Marxist - if they, like me, have one - sputtering. The oversight, if we can call it that, is overcompensated in the novel whose analysis gets a chapter to itself: James Robertson's And the Land Lay Still. It's the most ambitious Scottish realist novel for decades, grand in scale and scope and an immersive read. Ranging from the late 1940s to the early 21st Century, the novel interweaves family sagas and stories of personal individuation with political and parapolitical history to tell one overarching epic: the growth of national consciousness.
And therein lies one problem with it. It's as if at the back of every honest, decent Scot's mind is a relentless yammer of 'You Yes Yet?' Older generations are permitted to die in the old dispensation, shriven by their invincible ignorance, but those who live in the light of the new have no excuse. If they step off the path to nationhood they sink in the slough of self-loathing - as the two major pro-Union characters, an alcoholic police spy and a Tory MP undone by a secret fetish, in the end do.
Robertson conducts a large and varied cast through a long time and a complex plot with great skill to a most satisfactory click of closure. But, Hames argues, the difficulty of integrating the characters' lives with a political history that mostly consisted of tiny conventicles and ceilidhs in literally smoke-filled rooms and debates in widely unread periodicals, and that now and then took public form as 'set-piece' events in parliaments and streets, can defeat even the best novelist - even though Robertson was himself on those marches and in those rooms. It's a problem familiar in science fiction: one reviewer cited refers to Robertson's 'info-dumping', a term from the lexicon of SF criticism.
Hames's final chapters deal with Scots, the language, in relation to Scots, the people - and 'people' too is ambiguous, referring as it can to the nation as a whole or to 'the people' as opposed to the elite. Here Scots is an abrasion almost as raw as Gaelic, and more widely felt. At the risk of rubbing it, here's how it went. Centuries ago, Scots was an official language, known as Inglis. It was used at Court and in courts, in poetry and prose. For readers outwith Scotland, one wrote in Latin like any other literate European. After the Union of the Crowns and the Treaty of Union, the United Kingdom conquered a third of the world, and English replaced Latin as the de facto lingua franca. Scots was pushed out of administrative, then everyday upper-and-middle-class speech. Its several dialects became the language of the working poor of town and country. (Except in the Highlands, where the people were schooled and regimented straight from Gaelic into Standard English, which of course they spoke in their own distinct way.)
In the first Scottish literary renaissance, MacDiarmid and others sought to revive Scots as a national language, which they called Lallans or (because of the fusion of Scots dialects) 'Synthetic Scots'. This produced some great poems, but in polemic and reportage it can come across as affectation. Fights between the Lallans-scrievin old guard and the younger, more outward-facing literary intelligentsia flared in the 1960s and 1970s. But some new writers found another aspect of the language question, and one that far from being esoteric was central to everyday life, at least in the Central Belt. Modern vernacular Scots is different enough even from Scottish Standard English to separate the home and the school, the working class and the middle class. That difference could literally hurt, could smart and bruise, from the classroom tawse and the playground clout. At the same time, and very much as part of what Nairn excoriated as the conservative 'tartanry' of the proud Scot, the Scots language appeared in print as a quaint rustic dialect, in English spelling spattered with apostrophes, from Burns Night to the Broons patronised to within an inch of its life.
Now here I do remember personally, from the 1970s. Seeing for the first time urban West of Scotland demotic speech rendered phonetically in print, in Lament for a Lost Dinner Ticket by Margaret Hamilton, and 'Six Glasgow Poems' in Tom Leonard's Poems (1973), was a mental liberation. Almost as much, for me, as seeing for the first time Highland English dialogue accurately conveyed, in the children's novels of Allan Campbell McLean. The release came from not being patronised or mocked. Only that!
Not a lot to ask, you might think, but this modest request was seldom met with comprehension, let alone satisfaction. As an issue with which to elide class hurt with national grievance, it packed a wallop. But only, or mainly, on the individual level. In a country and at a time where upward social mobility is closely connected with further education and a change in language, the typical agonies of the intellectual of working-class origin growing away from their roots and the socialist of middle-class origin separated by accent and vocabulary from the class they most wish to speak to (or, problematically, for) are widespread enough to make these private pains a social force.
Scotland's peculiar development, however, has meant that Scots has very little chance of becoming the national language. Stranger things have happened, but... Naw. More likely, and well under way, is its official celebration as one of several languages spoken in Scotland. This provides gainful employment to some, and bewilderment to schoolchildren who speak what they think is English, but which they are now taught (in English) is another language, Scots.
As Hames suggests, this linguistic and social devolution within the devolved polity serves to defuse any class and national charge that spoken Scots still has, and offers its speakers symbolic representation in the place of - or at least, quite independently of - any actual power. The identity politics of a section of the working class is assimilated to the identity politics of the nation, to which its characteristic manner of speech is supposed to lend authentic voice. What this contributes to the material condition, let alone the social and political self-confidence, of the working class within Scotland is another matter entirely. All those years after Trainspotting, it's still shite being Scottish.
Like the devolution settlement as a whole, this uneasy arrangement leaves a lot of unfinished business. Looking back on the Scottish 1980s that I saw only from a safe distance, I have a wry suspicion that somebody was running a Gramscian strategy through those smoke-filled rooms. That self-effacing Modern Prince has yet to have their share of glory. Be that as it may, the smoke-free Scotland of 2020 cries out for an analysis of likewise Gramscian canniness. Scott Hames's book is avowedly not it, but points towards that, and beyond to an unknown 'utopian' future wherein we speak for ourselves.
For years — actually for decades — those of us in the Computer Science community who study election systems have with almost total unanimity warned against the rise of electronic voting, Internet voting, and more recently smartphone/app-based voting systems. I and my colleagues have written and spoken on this topic many times. Has anyone really been listening? Apparently very few!
We have pointed out repeatedly the fundamental problems that render high-tech election systems untrustworthy — much as “backdoors” to strong encryption systems are flawed at foundational levels.
Without a rigorous “paper trail” to back up electronic votes, knowing for sure when an election has been hacked is technically impossible. Even with a paper trail, getting authorities to use it can be enormously challenging. Hacking contests against proposed e-voting systems are generally of little value, since the most dangerous attackers won’t participate in those — they’ll wait for the real elections to do their undetectable damage!
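To make the point concrete, here is a toy sketch of my own (not from the essay, and deliberately oversimplified): when a machine's own records are the only evidence, a compromised machine can rewrite the votes *and* its own electronic log consistently, so recounting the stored records can never reveal the tampering. All function and candidate names here are hypothetical.

```python
def honest_machine(ballots):
    """Record ballots faithfully; the 'audit log' is whatever the machine stores."""
    log = list(ballots)
    tally = {"A": ballots.count("A"), "B": ballots.count("B")}
    return tally, log

def hacked_machine(ballots):
    """Flip every vote for A to B, then write a log that matches the flipped votes."""
    flipped = ["B" if b == "A" else b for b in ballots]
    log = list(flipped)  # the log is rewritten to agree with the fraudulent tally
    tally = {"A": flipped.count("A"), "B": flipped.count("B")}
    return tally, log

def electronic_audit(tally, log):
    """The only purely electronic check available: recount the stored log."""
    return tally == {"A": log.count("A"), "B": log.count("B")}
```

Both machines pass `electronic_audit` with flying colours, even though they report different winners; only an independent record the machine cannot rewrite (paper, marked by the voter) breaks the circularity.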
Of course it doesn’t help when the underlying voting models are just this side of insane. Iowa’s caucuses have become a confused mess on every level. Caucuses throughout the U.S. should have been abandoned years ago. They disenfranchise large segments of the voting population who don’t have the ability to spend so much time engaged in a process that can take hours rather than a few minutes to cast their votes. Not only should the Democratic party have eliminated caucuses, it should no longer permit tiny states whose demographics are wholly unrepresentative of the party — and of the country as a whole — to be so early in the primary process.
In the case of Iowa (and it would have been Nevada too, but they’ve reportedly abandoned plans to use the same flawed app) individual voters weren’t using their smartphones to vote, but caucus locations — almost 1700 of them in Iowa — were supposed to use the app (that melted down) to report their results. And of course the voice phone call system that was designated to be the reporting backup — the way these reports had traditionally been made — collapsed under the strain when the app-based system failed.
Some areas in the U.S. are already experimenting with letting larger and larger numbers of individual voters use their smartphones and apps to vote. It seems so obvious. So simple. They just can’t resist. And they’re driving their elections at 100 miles an hour right toward a massive brick wall.
Imagine — just imagine! — what the reactions would be during a national election if problems like Iowa’s occurred then on a much larger scale, especially given today’s toxic conspiracy theories environment.
It would be a nuclear dumpster fire of unimaginable proportions. The election results would be tied up in courts for days, weeks, months — who knows?
We can’t take that kind of risk. Or if we do, we’re idiots and deserve the disaster that is likely to result.
Make your choice.
–Lauren–
Not that I’m a fan of Trump, but the move to establish a US Space Force caught my attention. There are two points of interest. The lesser one is what apparently happened. Of greater interest to me is how someone could use it in military science fiction, and what it might say about the future of space warfare. And space cadets. What apparently happened, and why the Democrats agreed to founding the USSF. Here’s ye olde Wikipedia page on the USSF, and if you dig into the details it’s a bit less revolutionary or boondoggle-y than one might first guess. There were two things going on for the last I-don’t-know-how-many-decades (six decades?). The bigger fight was the perennial one in the US Department of Defense, over which branch of the military got to control which resource. The related fight, within the US Air Force, was how many resources went into their space division, versus resources to pilots and planes.
I got a peanut gallery seat, because in San Diego, right next to the I-5 freeway on the way into downtown, is this huge Navy building prominently labeled “SPAWAR.” I’d always assumed that, in addition to running the Navy’s program on using dolphins and sea lions as patrol animals (among other things), it held some important chunk of the Navy’s space warfare command. And it did, until June 2019, when it pivoted into Informational Warfare. Long story short, the fight over whether each service had its own space arm, or whether one service got to bogart the satellites (so to speak) was finally settled (for now!) in favor of the US Air Force becoming the US Air and Space Force in all but name.
Trouble is, the satellite intelligence Force of the USAF was apparently not getting sufficiently funded, presumably because jet pilots are cool while others drool, and more importantly, because fraternal funding battles are where the echelons above reality get their combat experience and promotions. Anyway, long story short, the funding and independence battle between air pilots and space cadets had been going on for decades, and in 2019 the solution (first proposed in the early 2000s) was to split off the space wing of the USAF into a semi-separate US Space Force which would run the military’s space efforts. However, the USSF is under the Secretary of the Air Force, just as the US Marine Corps is under the Secretary of the Navy. So the USSF is a separate force, but not very separate just yet. I suspect the cadets in Colorado Springs who go in for the new BS in Space Operations are going to get heartily sick of the “space cadet” label.
So that’s the US Space Force. It’s not boots in the sky just yet (or weapons deployed in space). Right now it’s about flying satellites and doing things with them. This IS a critical part of the US military, regardless of whether there are weapons up there or not, so I’m actually okay with them being their own force.
The fun part is going forward, what this means for science fiction, specifically the military culture of space. To begin with, although I only watched a few episodes of Stargate, I’m perfectly aware that the USAF already has been represented in milSF quite successfully (for those who don’t know, the Stargate of the series was run by the US Air Force, who apparently cheerfully cooperated with the filming of the series). While I’m not a military SF expert, my major exposure has been to a certain Honor Harrington, whose military is based rather more on the British Royal Navy of centuries past. Basing a military SF story on the memes swiped from the modern USSF will be *extremely* different than something aping the Honorverse, possibly in useful ways.
Let’s start with the parameters of space warfare. Assuming interstellar spaceships are possible, and especially assuming FTL is possible, how do you shoot at a spaceship? It’s moving far too rapidly for a human to perceive, and probably far too small to see due to extreme distances. Star Wars and kin notwithstanding, bodies in space normally move 1-2 orders of magnitude faster than a bullet, so in real life, you don’t hit them by firing guns at them. At best we’re looking at machines firing lasers or missiles at each other and maybe occasionally hitting, sort of like WW 1 torpedoes. The idea of Cpl Luke or MSgt Han swinging a gun, acquiring a target, and hitting with the shot is orders of magnitude too slow.
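The speed claim above is easy to check with back-of-the-envelope numbers (my own figures, not the author's): a rifle bullet leaves the muzzle at roughly 0.9 km/s, a satellite in low Earth orbit moves at about 7.7 km/s, and Earth itself circles the Sun at about 30 km/s, which works out to one to one-and-a-half orders of magnitude faster than the bullet.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_LEO = 6.771e6      # orbital radius at ~400 km altitude, m

def circular_orbit_speed(central_mass_kg, radius_m):
    """Speed of a circular orbit from the vis-viva relation: v = sqrt(GM/r)."""
    return math.sqrt(G * central_mass_kg / radius_m)

v_leo = circular_orbit_speed(M_EARTH, R_LEO)  # roughly 7.7 km/s
v_rifle = 900.0                               # typical rifle muzzle velocity, m/s
v_earth_heliocentric = 29_800.0               # Earth's speed around the Sun, m/s

# Ratios: LEO is ~8-9x a bullet; interplanetary speeds are ~30x and up.
```

Interstellar craft would be faster still, so the "machines shooting at machines" conclusion only gets stronger.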
Anyway, that’s space warfare interpreted as conventional warfare, 20th Century style. And it works in stories. David Weber, to his great credit, made a lot of fun and profit in the Honorverse, through making space naval broadsides cool again.
However, we’re in the 21st Century, and hybrid warfare is the thing these days, rather than battleships or even aircraft carriers. Can you destroy a starship with physical sabotage, cyber warfare, or social hacking? Why yes, yes you can. The speed of the spaceship is not only irrelevant to such attacks, it actually makes sabotage more dangerous and harder to detect in the resulting debris field. And not so oddly, the USSF strongly appears to be a hybrid warfare force. Its lack of guns in space may be completely irrelevant to its legitimacy or even its deadliness.
If someone wants to do military SF about interstellar warfare now, it is, yes, possible to unlimber the gimbal-mounted laser cannon and go pew pew pew, as has been done for over 40 years. Or you can create a universe where starships in full flight are moving too fast to hit with anything except maybe an exceptionally lucky shot with a laser, or perhaps a really well guided, really expensive rocket (and even then, the chance of a hit is fairly abysmal, considering how much the munition costs to launch). Therefore, if you want to wage successful warfare against an enemy starship in flight, you attack when they’re in orbit around planets (moving slower in predictable paths) or on the ground. Or you attempt to hack them, boobytrap the information going into the ship, and hack the social networks running it, using every weakness you can find about the crew. Countering such attacks, as we know now, is hard. It also can make for interesting storytelling. Is someone aboard ship a mole, a kamikaze saboteur, or just cracking slightly faster than everyone else under an unending bombardment of psychological warfare? Therein lies part of a story.
There are parallels in older science fiction. Here, I’m thinking particularly of James Schmitz and his psionics. I suspect one can draw memes from Schmitz’s psionics stories and repurpose them to our emergent AI era, where you can use big data and machine learning to get inside someone’s skull almost as effectively as a budding telepath could. Perhaps not so oddly, Schmitz served during WW2 in the US Army Air Corps, the predecessor to the USAF.
Then there’s the whole military culture thing. I’m not a veteran, but I do like to read, and one of the books I’ve reread several times is Carl Builder’s 1989 The Masks of War: American Military Style in Strategy and Analysis. It’s obviously a bit obsolete, but it’s still relevant, because it talks about the different ways the Navy, Army, and Air Force go about dealing with reality. For universe building it’s a worthwhile read, combined of course with other, more modern sources (I’d suggest Chris Hadfield’s An Astronaut’s Guide to Life On Earth for one).
For example, SF traditionally has space admirals, because space ships are independent commands like ships, and…But that’s not how the Generals of the Space Force would work, if you believe Masks of War. They’re less about crusty tradition, and far more about technological superiority, creating doctrine to implement long-term strategies, and using technical analyses to inform their opinions (I don’t think they’ll ever just trust their feelings, whether the Force is strong in them or not). If this sounds like corporate America, Builder noted the similarity.
As an example of the cultural difference, US naval aviators identify as Navy officers first, pilots second, while an Air Force aviators identify themselves by the kinds of planes they fly. It’s a different mindset, and it leads to a different culture of warfare. Again, you can see this a bit in Schmitz’s writing, where battles are often less about naval engagements in space, and more about quick shoot-outs using highest tech guns, with a large side order of skullduggery.
Incidentally, that skullduggery has historical roots. If I remember correctly, CIA officers who were required to be military officers often got themselves commissioned in the Air Force rather than the other services, and there’s currently a big overlap between the US Space Force and the “civilian” (hah!) National Reconnaissance Office, which does US satellite espionage. The USAF and the black world of clandestine military activity have been closely associated for a very long time.
I could certainly go on, but if you’re interested in writing military SF, or even writing SF stories about interstellar flight, it’s worth taking a long, even sidelong look, at this new US Space Force and seeing whether the difference sparks your creativity. Yes, you can still go from windjammers to sunjammers if you must (with space marines doing drops instead of landing on beaches! And crusty admiralty politics among the Lords of Space!). But if you want to be new and different, maybe get into mind-hacking in space and starship sabotage, and see where that leads you. If you’re writing your ToE chart, instead of having the captain of the starship reporting to a commodore or the space admiralty, you might, alternatively, have the captain of a flight (in USAF terminology, about 100 people, or 3-4 craft, perhaps a starship and attached drone crews) reporting to a lieutenant colonel running the squadron, who in turn reports to the colonel running the wing (in increasing size), who in turn reports to a general running the numbered space force. And that’s just a trivial example.
Yes, yes, I know the USSF is really just a boondoggle. Nothing here to see at all. Whatever. I figure it’s grist for the mill, and if it doesn’t fight, maybe it will still inspire something fun to read.
What did I miss?
By now, you have almost certainly heard of the dark web. On sites unlisted by any search engine, in forums that cannot be accessed without special passwords or protocols, criminals and terrorists meet to discuss conspiracy theories and trade child pornography.
We have reported before on the dark web's "hurtcore" communities, its human trafficking markets, its rent-a-hitman websites. We have explored the challenges the dark web presents to regulators, the rise of dark web revenge porn, and the frightening size of the dark web gun trade. We have kept you informed about that one dark web forum where you can make like Walter White and learn how to manufacture your own drugs, and also about—thanks to our foreign correspondent—the Chinese dark web. We have even attempted to catalog every single location on the dark web. Our coverage of the dark web has been nothing if not comprehensive.
But I wanted to go deeper.
We know that below the surface web is the deep web, and below the deep web is the dark web. It stands to reason that below the dark web there should be a deeper, darker web.
A month ago, I set out to find it. Unsure where to start, I made a post on Reddit, a website frequented primarily by cosplayers and computer enthusiasts. I asked for a guide, a Styx ferryman to bear me across to the mythical underworld I sought to visit.
Only minutes after I made my post, I received a private message. "If you want to see it, I'll take you there," wrote Reddit user FingerMyKumquat. "But I'll warn you just once—it's not pretty to see."
Getting Access
This would not be like visiting Amazon to shop for toilet paper. I could not just enter an address into the address bar of my browser and hit go. In fact, as my Charon informed me, where we were going, there are no addresses. At least, no web addresses.
But where exactly were we going? The answer: Back in time. The deepest layer of the internet is also the oldest. Down at this deepest layer exists a secret society of "bulletin board systems," a network of underground meetinghouses that in some cases have been in continuous operation since the 1980s—since before Facebook, before Google, before even stupidvideos.com.
To begin, I needed to download software that could handle the ancient protocols used to connect to the meetinghouses. I was told that bulletin board systems today use an obsolete military protocol called Telnet. Once upon a time, though, they operated over the phone lines. To connect to a system back then you had to dial its phone number.
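Under the hood, "dialing" a telnet BBS today is just opening a TCP socket, usually to port 23, and filtering out the protocol's inline IAC negotiation bytes so the text (and ANSI art) comes through cleanly. Here is a minimal sketch of that idea in Python; the host name is a placeholder, and a real client like SyncTerm does far more (option negotiation, terminal emulation, ANSI rendering).

```python
import socket

IAC = 255  # telnet "Interpret As Command" escape byte (RFC 854)

def strip_iac(data: bytes) -> bytes:
    """Remove telnet IAC command sequences, keeping ordinary text bytes."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] != IAC:
            out.append(data[i])
            i += 1
        elif i + 1 < len(data) and data[i + 1] == IAC:
            out.append(IAC)  # IAC IAC is an escaped literal 0xFF
            i += 2
        elif i + 1 < len(data) and 251 <= data[i + 1] <= 254:
            i += 3           # WILL/WONT/DO/DONT take one option byte
        else:
            i += 2           # any other two-byte command
    return bytes(out)

def dial(host: str, port: int = 23, timeout: float = 10.0) -> bytes:
    """'Dial' a BBS: open a TCP connection and return its cleaned-up banner."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return strip_iac(sock.recv(4096))

if __name__ == "__main__":
    # Hypothetical host, for illustration only.
    print(dial("bbs.example.com").decode("cp437", errors="replace"))
```

The `cp437` decode is there because classic BBSes drew their menus in the IBM PC's original box-drawing character set.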
The software I needed was called SyncTerm. It was not available on the App Store. In order to install it, I had to compile it. This is a major barrier to entry, I am told, even to veteran computer programmers.
When I had finally installed SyncTerm, my guide said he needed to populate my directory. I asked what that was a euphemism for, but was told it was not a euphemism. Down this far, there are no search engines, so you can only visit the bulletin board systems you know how to contact. My directory was the list of bulletin board systems I would be able to contact. My guide set me up with just seven, which he said would be more than enough.
More than enough for what, I wondered. Was I really prepared to go deeper than the dark web? Was I ready to look through this window into the black abyss of the human soul?
The vivid blue interface of SyncTerm. My directory of BBSes on the left.
I decided first to visit the bulletin board system called "Heatwave," which I imagined must be a hangout for global warming survivalists. I "dialed" in. The next thing I knew, I was being asked if I wanted to create a user account. I had to be careful to pick an alias that would be inconspicuous in this sub-basement of the internet. I considered "DonPablo," and "z3r0day," but finally chose "ripper"—a name I could remember because it is also the name of my great-aunt Meredith's Shih Tzu. I was then asked where I was dialing from; I decided "xxx" was the right amount of enigmatic.
And then—I was in. Curtains of fire rolled down my screen and dispersed, revealing the main menu of the Heatwave bulletin board system.
The main menu of the Heatwave BBS.
I had been told that even in the glory days of bulletin board systems, before the rise of the world wide web, a large system would only have several hundred users or so. Many systems were more exclusive, and most served only users in a single telephone area code. But how many users dialed the "Heatwave" today? There was a main menu option that read "(L)ast Few Callers," so I hit "L" on my keyboard.
My screen slowly filled with a large table, listing all of the system's "callers" over the last few days. Who were these shadowy outcasts, these expert hackers, these denizens of the digital demimonde? My eyes scanned down the list, and what I saw at first confused me: There was a "Dan," calling from St. Louis, MO. There was also a "Greg Miller," calling from Portland, OR. Another caller claimed he was "George" calling from Campbellsburg, KY. Most of the entries were like that.
It was a joke, of course. A meme, a troll. It was normcore fashion in noms de guerre. These were thrill-seeking Palo Alto adolescents on Adderall making fun of the surface web. They weren't fooling me.
I wanted to know what they talked about with each other. What cryptic colloquies took place here, so far from public scrutiny? My index finger, with ever so slight a tremble, hit "M" for "(M)essage Areas."
Here, I was presented with a choice. I could enter the area reserved for discussions about "T-99 and Geneve," which I did not dare do, not knowing what that could possibly mean. I could also enter the area for discussions about "Other," which seemed like a safe place to start.
The system showed me message after message. There was advice about how to correctly operate a leaf-blower, as well as a protracted debate about the depth of the Strait of Hormuz relative to the draft of an aircraft carrier. I assumed the real messages were further on, and indeed I soon spotted what I was looking for. The user "Kevin" was complaining to other users about the side effects of a drug called Remicade. This was not a drug I had heard of before. Was it some powerful new synthetic stimulant? A cocktail of other recreational drugs? Was it something I could bring with me to impress people at the next VICE holiday party?
I googled it. Remicade is used to treat rheumatoid arthritis and Crohn's disease.
In reply to the original message, there was some further discussion about high resting heart rates and mechanical heart valves. I decided that I had gotten lost and needed to contact FingerMyKumquat. "Finger," I messaged him, "What is this shit I'm looking at here? I want the real stuff. I want blackmail and beheadings. Show me the scum of the earth!"
"Perhaps you're ready for the SpookNet," he wrote back.
SpookNet
Each bulletin board system is an island in the television-static ocean of the digital world. Each system's callers are lonely sailors come into port after many a month plying the seas.
But the bulletin board systems are not entirely disconnected. Faint phosphorescent filaments stretch between the islands, links in the special-purpose networks that were constructed—before the widespread availability of the internet—to propagate messages from one system to another.
One such network is the SpookNet. Not every bulletin board system is connected to the SpookNet. To get on, I first had to dial "Reality Check."
The Reality Check BBS.
Once I was in, I navigated my way past the main menu and through the SpookNet gateway. What I saw then was like a catalog index for everything stored in that secret Pentagon warehouse from the end of the X-Files pilot. There were message boards dedicated to UFOs, to cryptography, to paranormal studies, and to "End Times and the Last Days." There was a board for discussing "Truth, Polygraphs, and Serums," and another for discussing "Silencers of Information." Here, surely, I would find something worth writing about in an article for VICE.
I browsed and I browsed. I learned about which UFO documentaries are worth watching on Netflix. I learned that "paper mill" is a derogatory term used in the intelligence community (IC) to describe individuals known for constantly trying to sell "explosive" or "sensitive" documents—as in the sentence, offered as an example by one SpookNet user, "Damn, here comes that paper mill Juan again." I learned that there was an effort afoot to get two-factor authentication working for bulletin board systems.
"These are just a bunch of normal losers," I finally messaged my guide. "Mostly they complain about anti-vaxxers and verses from the Quran. This is just Reddit!"
"Huh," he replied. "When you said 'scum of the earth,' did you mean something else?"
I had one last idea. In their heyday, bulletin board systems were infamous for being where everyone went to download illegal, cracked computer software. An entire subculture evolved, with gangs of software pirates competing to be the first to crack a new release. The first gang to crack the new software would post their "warez" for download along with a custom piece of artwork made using lo-fi ANSI graphics, which served to identify the crack as their own.
I wondered if there were any old warez to be found on the Reality Check BBS. I backed out of the SpookNet gateway and keyed my way to the downloads area. There were many files on offer there, but one in particular caught my attention: a 5.3 megabyte file just called "GREY."
I downloaded it. It was a complete PDF copy of E. L. James' Fifty Shades of Grey.
While some children were born without faces simply because they didn't deserve them (see the Scarfolk Annual 197X), the government became increasingly concerned about citizens who did have them. They found that people with faces are more likely to have personal desires, hopes and dreams, in short: a will and ideas of their own.
Such idiosyncrasies were not only thought of as needlessly self-indulgent, they were also deemed inconsistent with the smooth running of a successful society. Scarfolk's was the first council benevolent enough to offer face removals on the NHS.
In 1976, the council trialled face removals on stray foreigners, prisoners, children nobody wanted, unsuspecting people who were picked up leisurely walking in a park after sundown and volunteers (see leaflet above).
When the full scheme was rolled out in 1977, the council soon lost track of which faceless citizen was which. By 1978 a new law was passed which dictated that all faceless people were required to have a tattoo of their old face over their lost one to make identification easier.
Vance’s Dying Earth series (1950-1984) is one of the more famous series in fantasy, influential not least for killing off loads of magic users in Dungeons and Dragons with the Vancian “fire and forget” magic system. However much you love or loathe the books, there’s a bunch of stuff Vance got wrong. If an enterprising author wants to play in the far future of Earth/Dying Earth subgenre, given what we know now, it would be quite different from what Vance envisioned. And Hot Earth Dreams can help. First, about the Dying planet genre: It’s not just Vance. There’s Burroughs’ Barsoom, Clark Ashton Smith’s Zothique, Wolfe’s The Book of the New Sun, and many others. Still, Vance’s is probably the best known (possibly after Barsoom), and it’s the one people think of. Sun’s going out, civilization is decaying, magic has replaced science, and morality becomes, erm, more transactional.
What Vance got wrong was the idea of the sun going out like a guttering coal. As we know now, the Sun instead is going to get hotter, ultimately evaporating off Earth’s oceans and making surface life impossible before it (probably) grows into a red giant and swallows our planet. So pale white people wandering around on a shady world is almost certainly not going to happen. But the sun sterilizing Gaia is around a billion years off, and there’s a lot of future between now and then.
Then there’s the whole supercontinent (“Zothique?”) in our future. That may be 150-350 million years hence. There are four separate scenarios for how that supercontinent might form, two views of a fifth, and a sixth I can’t find a link to that’s totally different. Also, there’s good reason to think that we don’t really understand plate tectonics as well as we might, now that we’re getting a better understanding of “Earth’s interior continents” and what happens after continents subduct. Anyway, if you believe in the whole supercontinent cycle thing (tl;dr, supercontinents form every 300-500 million years, details endlessly argued over), we’ve got enough time for another two Pangeas in the next billion years. Just because they’re so alien to us, I’m going to focus on these going forward, but the future could be just about any arrangement, given what we know now.
Here are some things to understand about a supercontinent-ish planet.
- Earth has two modes, icehouse and hothouse. We’re currently in an icehouse, and the climate change disaster is disastrous only because we’re rapidly and temporarily shoving Gaia into hothouse mode. While Earth has spent about 80% of the last 500 million years or so in hothouse mode, we are children of the ice, and most hominid evolution took place in the context of repeating ice ages. Worse, perhaps, the last time the Earth jumped from icehouse to hothouse was the end of the Carboniferous, so we don’t have an “evolutionary memory” of living species that have gone through this. That’s what makes Hothouse Earth so dangerous to us, and especially to global civilization.
- Getting back to Pangea Nextia (a name chosen because it hasn’t been published for a model, unlike Pangea Proxima, P. Ultima, or P. Nova), it’s almost certainly going to be a hothouse, with no polar ice caps and a fairly low temperature gradient from the equator to the poles (meaning hot poles, a hotter equator, large dead zones in the ocean deeps, really hot subtropical deserts, few if any everwet equatorial forests, and most of the diversity in the large number of para/sub/dry seasonal tropical forests everywhere from near the equator to around 50° north). Minus the deserts around 30° north where the Hadley Cell comes down (except for islands, mountaintops, etc.).
- Pangea Nextia will have a super-Saharan desert at the same latitude (north or south) as the current one, because that’s the way global climate works. It will also likely have Himalayan-style mountains where continents plowed into each other (The Himalayas are where India plowed into Asia), Andes-style mountains where oceanic crust subducted under the edge of a continent, and Alps/Zagros/Mediterranean/etc mountains where large continents coming together crushed a bunch of ocean and a small continent between them. The cordilleras will be sort of like the stitches on Frankenstein’s assembly scars, and for much the same reason.
Why is this all important? Climate dictates how and where people live, so knowing how supercontinental biomes work helps set the scene. Mountains are water towers, not just from mountain glaciers, but because water percolates into mountains and comes out in mountain springs. This is probably why they’re so important in so many religions. They’re not just places to get closer to god, the waters coming off mountains keep people alive, as well as providing refuges for all sorts of life. Your scenario likely has rivers running from mountains, deserts, and/or monsoonal forests in it. Understand why they’re important, and you’ll understand why people revere, fight over, and take refuge in them. Hint hint.
Speaking of life, what does the future hold?
- The first question is about mass extinction events, and how many the Earth will experience before your Dying Earth scenario starts. We may or may not trigger a mass extinction in the next 50 years (it’ll be submassive if not truly mass). Other likely extinction triggers are large igneous provinces and asteroid impacts, of which the former is much more common. If I had to guess, the next large igneous province is going to emerge under the African Rift or the Canary Islands (look at the simulation in this article, and see where the big blobs of magma are close to the surface). It’s possible a fat LIP will emerge well before the next supercontinent forms, so that’s one, if not two, mass extinctions prior to the scenario time. And possibly a third if the suturing together of continents causes another mass extinction from radical amounts of mountain building. If you want to go most of a billion years, that’s possibly another seventeen extinction events, including two asteroid strikes, any number of LIPs, and petroleum-based civilization recreating itself from resequestered petroleum at least five if not ten times (every 100-200 million years).
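For those who like to check the arithmetic, those event counts can be sketched as a back-of-envelope calculation. The recurrence intervals below are rough illustrative assumptions chosen to match the figures in this post, not published rates:

```python
# Back-of-envelope event counts for the deep future. Every interval here
# is an illustrative assumption, not a measured recurrence rate.

def expected_events(horizon_years, interval_years):
    """Rough count of events recurring, on average, every `interval_years`."""
    return horizon_years // interval_years

HORIZON = 800_000_000  # "most of a billion years"

asteroid_strikes = expected_events(HORIZON, 400_000_000)   # dino-killer-scale impacts
petro_rebuilds = expected_events(HORIZON, 150_000_000)     # oil resequesters every 100-200 My
lip_extinctions = expected_events(HORIZON, 50_000_000)     # extinction-scale LIPs

print(asteroid_strikes, petro_rebuilds, lip_extinctions)   # 2 5 16
```

With those (debatable) intervals you get two asteroid strikes, five petroleum-fueled civilizations, and sixteen LIP-driven events, which lands in the same ballpark as the seventeen extinction events mentioned above.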
- How does life survive extinction events? The quick answer is underground, which is why the super-rich building tunnels for trolls, er, survival bunkers may be evolutionarily significant. I think there’s a decent case that animals and plants that can live extended periods underground tend to survive mass extinctions. Animals do this by, erm, making their burrows part of their extended phenotype (see the book referenced above), while plants normally hide seeds underground, which is why plant evolution really doesn’t show the changes during extinction events that animal evolution does. What tends to get erased by extinction events are complex ecosystems like coral reefs and forests. Forests after an extinction event are often quite different from those that came before, not just because the large herbivores are missing, but because so are the specialist symbiotes (pollinators, pathogens, and parasites). These all take 5-20 million years to re-evolve, depending on the severity of the extinction event. Ditto for coral reefs.
- Going forward, as the Earth warms we can expect C4 photosynthesis to start dominating over C3 plants (the current norm). C3 photosynthesis evolved billions of years ago, before there was a lot of oxygen in the atmosphere and when the sun was dimmer. As a result, the key enzyme (rubisco) has a bad habit of screwing up when it’s too hot or there’s too little CO2 around. Plants have a lot of cellular machinery to deal with this (see photo-oxidation, heat shock proteins, and others). C4 basically adds a turbocharger to the photosynthetic cells to boost the level of CO2 encountered by rubisco (the “turbo” is a mnemonic, because C4 plants have a distinctive type of ring anatomy in the cells of their leaves which gives their nature away). The way it works is that some cells do C4 photosynthesis, creating a 4-carbon compound that is passed to other cells doing conventional C3 photosynthesis, where it is broken down to supply extra carbon. C3 photosynthesis in the receiving cells then produces 3-carbon building blocks that get turned into 6-carbon sugar molecules. Anyway, C4 has evolved a number of times, but entirely among angiosperms and entirely in the last hundred million years or less. The most familiar examples are maize, sugarcane, and sorghum, but it shows up in a number of dicot plants, mostly (but far from entirely) in the Caryophyllales and Euphorbiaceae. With the exception of a few rare trees on the Hawaiian Islands, C4 plants are entirely herbaceous, and the Hawaiian species are probably an example of the phenomenon of insular woodiness, which is incredibly cool if you’re a plant nerd (look it up).
The reason for C4 plants being herbaceous, often weedy, is actually important to scenario building for two reasons. One is that, in plants, radical new adaptations (flowers, compound flowers, etc.) tend to pioneer their shtick as vagrants (e.g. weeds) in highly disturbed areas. Once they succeed in these edgy venues, they start colonizing more complex, intact ecosystems, eventually evolving into dominant forest species and the like. This shows up in plant clades often starting off with small, wind- and gravity-dispersed seeds, then evolving towards bigger seeds and animal-dispersed fruits that are more suitable for competing in a forest. If you’re thinking about how plants evolve from weeds to forest giants after extinction events, this is how they do it, and this is why C4 plants are among those that will likely become more dominant in the future–many of our currently really obnoxious weeds are C4 plants.
The other thing is that C4 plants do better with bright lights and high temperatures than do C3 plants, but only up to a point. Corn, for example, overheats around 45°C, with grain production dropping rapidly as temperatures rise in this region and plants dying when they go past their limits. As a result, corn production is likely to take a huge hit in the next century, as areas that grow corn in the summer find it too hot to deal. C3 winter wheat production is modeled as being less harmed, because this cool season crop doesn’t hit its upper limits during climate change, and indeed it might do better. A century from now, cool season corn might be the thing, but the bigger point is that every plant has its upper limits. C4 stretches but does not eliminate those limits. That’s what keeps C3 plants around, in cooler and shadier areas. But yes, monsoon forests dominated by salt-oaks (the big-seeded descendants of today’s chenopod salt bushes) could easily be a thing on Pangea Nextia. So could giant sugar cane brakes.
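The "stretches but does not eliminate" point is easy to see with a toy temperature-response curve. The optima and cutoffs below are made-up illustrative numbers, not real crop physiology:

```python
# Toy model: relative photosynthetic performance vs. temperature for a
# C3 and a C4 plant. Shapes and thresholds are illustrative assumptions.

def performance(temp_c, optimum, upper_limit):
    """Rises linearly to an optimum, then falls to zero at an upper limit."""
    if temp_c <= 0 or temp_c >= upper_limit:
        return 0.0
    if temp_c <= optimum:
        return temp_c / optimum
    return (upper_limit - temp_c) / (upper_limit - optimum)

def c3(temp_c):  # cool-season plant: lower optimum, lower ceiling
    return performance(temp_c, optimum=25, upper_limit=40)

def c4(temp_c):  # warm-season plant: higher optimum, higher (but finite) ceiling
    return performance(temp_c, optimum=35, upper_limit=45)

# At 15 °C the C3 plant wins; at 35 °C the C4 plant wins;
# past 45 °C both are out of the game.
for t in (15, 25, 35, 44, 46):
    print(t, round(c3(t), 2), round(c4(t), 2))
```

C3 wins in the cool and the shade, C4 wins in the heat, and both hit a hard ceiling, which is why even C4-dominated monsoon forests would keep C3 plants in the cooler, shadier spots.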
- Now let’s look at human evolution. Why expect humans to be around in hundreds of millions of years? This is what Hot Earth Dreams is about, and if you’re reading this blog, you may well have read the book. The tl;dr version is that I think that humans have two inheritance systems: genes and culture. We do evolve genetically, but culture evolves radically faster than genes do. If we want to become oceanic piscivores, we don’t evolve webbed feet, we learn how to build boats and create and use fishing gear. And if that no longer works as a lifestyle, we fisherfolk can go ashore and go into symbiosis with large ruminants (become cowboys) or whatever. The speed at which cultures adapt buffers what would otherwise be strong selective pressures on our genes, meaning our genetic evolution gets slowed. Since I don’t think people get to be fisherfolk for hundreds of generations (or civilized, or farmers, or cowboys, or artists, or whatever), this slows our genetic evolution down, which is why I think it’s plausible to assume that humans could survive into the very deep future.
This doesn’t mean that I think humans won’t evolve. In fact, some of the biggest genetic selection pressures on humans come not from the physical environment but from coevolution, from our relationships with other organisms. These show up in things like the rapid spread of lactose tolerance (due to our symbioses with dairy animals), various disease tolerance genes (due to exposure to epidemics from living in settlements linked by long-distance trade routes), and possibly some genetic tolerance of things like alcohol and sugar (due to our symbioses with yeast and sucrose producers). I’m using the terminology of symbiosis because I really like Thompson’s geographic mosaic theory of coevolution (read about it here and here), but it’s basically that evolution proceeds through interaction among species within particular environments, so it’s about genotype 1 affecting genotype 2 while both interact in ecosystem A. Different interactions happen between other populations of the same organisms in different environments, and mosaic coevolution is (IMHO) a really handy theory for worldbuilding, because it helps you understand how every place becomes different.
Humans domesticating other species and doing agriculture, forest management, hunting, fishing, and so on are all examples of how we interact with particular populations of various species in different, geographically bounded environments. Right now, at Peak Civilization, coevolution to deal with humans is the major selective force on a huge number of species. Either they must become a symbiont/pet/agricultural species, become a pest, a commensal, or become utterly useless and ignored. Oh, and they must survive with climate change, pollution, and anthropogenic habitat loss. Our current, relentless evolutionary pressure will change drastically over the course of the next century, most likely as our civilization crashes, but possibly if we figure out this quasi-mythical sustainability thing and calm down. Regardless, it’s a huge thing now, and coevolution with humans across all the habitats we occupy will continue to be a big thing into the future.
- Now imagine life coevolving with humans for hundreds of millions of years. Right now, a lot of animals don’t really understand humans the way domesticated species like dogs and horses do. But going forward, it’s likely that a wide variety (possibly a huge majority) of animals will evolve to become able to decode our signals and hack our cultures, again, the way dogs do. They won’t be smart in a human sense, but they’ll be clever like Clever Hans. Plants and fungi will do their own versions of this adaptation (you can read about it in Botany of Desire).
This may seem abstract, so let’s talk about the difference between Africa and Papua New Guinea. Modern humans first evolved in Africa over 300,000 years ago, while they got to Papua maybe 50,000 years ago. Africa sustained the fewest megafauna extinctions of any continent, while Papua had only a few megafauna-type animals (bear-sized) that were wiped out tens of thousands of years ago. However you feel about the whole Younger Dryas extinction thing, Africa seems to be a place where the wild animals know how to deal with people a lot better than just about anywhere else other than maybe south Asia. It’s reasonable to think this is due in part to the animals coevolving with evolving humans for a really, really long time compared with what animals in most of the rest of the world experienced.
Going forward 200,000,000 years with humans continually present, what animals in Africa do now in the way of dealing with humans will seem quaint. Animals will have coevolved with humans starting when they were rat-equivalents who had just survived an extinction event by hiding in bunkers with us. And they may have evolved to elephant size over the ten million years afterwards, also in continual proximity to us, despite, or perhaps because of, all we did to them. Whatever their relationships with us (partners, food, social parasites, predators, commensals, amensals, parasites, etc.), they will know us very, very well. And we’ll know them.
In the Papuan mountains, you can sit around a campfire, even go hunting at night, without worrying about anything more than an accident, getting malaria from a mosquito, or stepping on a snake. In the African bush, you’ve got all that, plus lethal encounters with lions, leopards, hyenas, and hippos (among others), so you surround your campsite with a boma of thorny branches to keep the problem species from eating you (or at least you keep the fire going all night), and you don’t hunt at night. The deep future will look like Africa, and it’s quite likely that the local megafauna will have coevolved with us. A boma may be the minimum needed. Or perhaps you’ll be able to make an arrangement with the equivalent of a local pack of hyenas to not eat you in exchange for you cooking whatever they catch and tending their den for them. The possibilities are endless.
And that doesn’t even include crops. There are several authors who argue (I think wrongly) that the rise and continued existence of civilization depends on the extensive cultivation of grain, specifically barley, wheat, rice, or maize. I’m not going to regurgitate their argument, or why Hawai’i disproved it, but we humans have intimate and complex relationships with the grasses, including corn, sugar cane, bamboo, rice, wheat, barley, rye, sorghum, millet, tef, and so on (even lawns!). Given 100 million years or more of continued coevolution, we may be as intimately connected to our grain crops as leafcutter ants are to their fungal colonies. Or not. But with crops, it’s not just worth considering what exotic new crops will evolve and which old crops stay, but also how our symbioses with plants will deepen and become richer and possibly more necessary over hundreds of millions of years. And if grains bore you, think about 200,000,000 years of caffeine or alcohol coevolution.
As for human culture, in Hot Earth Dreams I mentioned the theory that languages effectively randomize (with the exception of baby talk) over maybe 10,000 years. We’ll never know the languages of the last ice age, let alone the first human language, and in 10,000 years or less, English will have utterly vanished. Second, archaeologically and culturally we seem to have a window of around 5,000 years where we can know anything at all useful. Prior to that, we’re increasingly limited to whatever rare artifacts were randomly preserved. Looking at the first 300,000 years of human existence, there is very little we can know about most of that history, and it grows less every year.
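The 10,000-year figure squares with glottochronology-style arithmetic. Assuming, purely for illustration, the classic Swadesh-style estimate that a language retains roughly 86% of its core vocabulary per millennium, the shared signal decays fast:

```python
# Glottochronology-style decay of recognizable core vocabulary.
# The 86%-per-millennium retention rate is the classic Swadesh figure,
# used here purely as an illustrative assumption.

RETENTION_PER_MILLENNIUM = 0.86

def core_vocab_remaining(years):
    """Fraction of core vocabulary still recognizable after `years`."""
    return RETENTION_PER_MILLENNIUM ** (years / 1000)

for years in (1_000, 5_000, 10_000):
    print(years, round(core_vocab_remaining(years), 2))
# 1000 -> 0.86, 5000 -> ~0.47, 10000 -> ~0.22
```

About a fifth of the core vocabulary survives in direct descent after 10,000 years, and the overlap between two independently diverging branches is roughly the square of that, a few percent: effectively randomized.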
Going into the future this erasure will get worse. Humans will have to recycle the ruins of old cities, because we will have exhausted readily available ore bodies, so we’ll have to remix the resources of the past to make the present. Do this for hundreds of millions of years, and that will be future culture. People will likely know that humans have been around “forever,” and they’ll likely have some idea of how long “forever” was (American Indians had some notion of deep time, because they had both exposed fossils and geologic evidence of past climates that was obvious enough for them to get it). But they won’t remember us. Mass extinctions and cultural erosion will see to that. Even things like race and ethnicity now appear only a few thousand years old, so they won’t look like us either.
For me, at least, the notion of hundreds of millions of years of coevolution with a world that complexifies the blurred boundaries between wild, feral, tame, domestic, and civilized is one of the chief appeals of a Far Deep Future scenario. The world may well be slowly dying, and humans will certainly be as flawed as ever, but we certainly won’t be alone. Instead, we’ll be surrounded by a full panoply of species, megaflora to nanofauna. Some of whom will work with us, more of whom will live with us, increasing numbers of which will live on or in us (stirges!), and still others will have evolved so that they are useless to us and avoid us completely. And some will love us for what they get from us, whether or not we reciprocate their feelings. Humans that survive into the deep future will be recognizably human, definitely understandable, but they won’t be us, and the world they live with will think we’re standoffish, isolated, and socially inept (wild even, feral at best) compared to them.
What did I miss?
One of the most poignant ironies of the Internet is that at the very time that it’s become increasingly difficult for anyone to conduct their day to day lives without using the Net, some categories of people are increasingly being treated badly by many software designers. The victims of these attitudes include various special needs groups — the visually and/or motor impaired are just two examples — but the elderly are a particular target.
Working routinely with extremely elderly persons who are very active Internet users (including in their upper 90s!), I’m particularly sensitive to the difficulties that they face keeping their Net lifelines going.
Often they’re working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official “end of life” for updates a few days ago.
Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other routine purposes for which all of us now routinely depend on these technologies.
This is a difficult state of affairs, to say the least.
But there’s an aspect of this that is even worse. It’s attitudes! The attitudes of many software designers suggest that they really don’t care about this class of users much — or at all.
They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function.
We can certainly stipulate that using old browsers and old operating systems is dangerous. In a perfect world, resources would be available to get everyone out of this situation.
But of course we don’t exist in a perfect world, and these users, who are already often so disadvantaged in so many other ways, need support from software designers, not disdain or benign neglect.
A current example of these users being left behind is the otherwise excellent, open source “Discourse” forum software. I use this software myself, and it’s a wonderful project.
Recently they announced that they would be pulling all support for Internet Explorer (except for limited read-only access) from the Discourse software. Certainly they are not the only site or project dropping support for old browsers, but this fact does not eliminate the dilemma.
I despise Internet Explorer. And yes, old computers running old OS versions and old browsers represent security risks to their users. Definitely. No question about it. But what of the users who don’t understand how to upgrade? Who don’t have anyone to help them upgrade? Are we to tell them that they matter not at all? Is the plan to try to ignore them as much as possible until they’re all dead and gone? Newsflash: This category of users will always exist!
This issue rose to the top of my morning queue today when I saw a tweet from Jeff Atwood (@codinghorror). Jeff is the force behind the creation and evolution of Discourse, and was a co-founder of Stack Exchange. He does seriously good work.
Yet this morning we engaged in the following tweet thread:
Jeff: At this point I am literally counting the days until we can fully remove IE11 support in @discourse (June 1st 2020)
Lauren: I remain concerned about the impact this will have on already marginalized users on old systems without the skills or help to switch to other browsers. They have enough problems already!
Jeff: Their systems are so old they become extremely vulnerable to hackers and exploits, which is bad for their health and the public health of everyone else near them. It becomes an anti-vaccination argument, in which nobody wins.
Lauren: Do you regularly work with extremely elderly people whose only lifelines are their old computers? Serious question.
Somewhere around this point, he closed down the dialogue by blocking me on Twitter.
This was of course his choice, but seems a bit sad when I actually had more fruitful discussions of this matter previously on the main Discourse discussion forum itself.
Of course his anti-vaxx comparison is inherently flawed. There are virtually always ways for people who can’t afford important vaccinations to receive them. Not so for upgrading computer hardware, software, or getting help working with those systems, particularly for elderly persons living in isolation.
Yes, the world will keep spinning after Discourse drops IE support.
Far more important than this particular case, though, is the attitude being expressed by so many in the software community. It suggests that many highly capable software engineers don’t really appreciate these users, or the kinds of problems that can prevent them from making even relatively simple changes or upgrades to their systems — systems they need to keep using as much as anyone — in the real world.
And that’s an unnecessary tragedy.
–Lauren–
Lenin Lives!
Philip Cunliffe
Zero Books, 2016
It can be disconcerting to read a book that upends your way of looking at the world. It's even more disconcerting when that book claims your own work as part of its inspiration. About which, more later.
The book's title and Soviet-kitsch cover are deeply ironic: baiting for some, and bait for others. In the alternate-history world Cunliffe imagines, Lenin is almost forgotten, because he succeeded. (It's tempting to add 'beyond his wildest dreams' but success beyond Lenin's wildest dreams would have meant spreading the revolution to the canal-builders of Mars.)
For what Lenin and the Bolsheviks set out to do in 1917 was to detonate an international, indeed global, revolution. This was an immediate perspective, where revolutionary romanticism meant staking all on the world revolution breaking out next week, while sober realism meant bearing in mind that it might be delayed for a few more months. In fact even the realists were too optimistic: it was delayed for a whole year. In November 1918 the red flags went up over the naval base at Kiel, and flew over all Germany within days. And then...
Well, everybody knows.
But what if the grip of German Social Democratic reformism had been that little bit shakier, and the revolutionary Left that little bit better organised and luckier? Cunliffe speculates on what sort of world might now exist, and how it might have come about, if the revolution that began in Russia had not only spread - as it did - but won, as it didn't.
In this missed turn of history, a decade or so of wars and civil wars see the capitalist core countries having gone socialist. The major independent underdeveloped countries have gone democratic, and the former colonial holdings have mostly opted to remain in loose voluntary federations that have replaced the empires. It's not all plain sailing but the resulting democratic workers' states of Europe and America are much less repressive than Bolshevik, let alone Stalinist, Russia was in our world. Planning emerges from increasing coordination (as indeed it did under the New Economic Policy) rather than central imposition. Industrialisation proceeds at a brisk but measured, rather than a frantic, pace. Art, science, culture and personal freedom flourish. This is a world with no fascism or Stalinism, no Depression and no Second World War. Whether or not the reader finds it feasible or desirable, it's attractively and vigorously portrayed.
Cunliffe's alternate history has no decisive moment (no Jonbar Point, to use the science-fictional term) that I can see. Instead, the international revolutionary working-class movement (which, as Cunliffe usefully and repeatedly reminds us, actually did exist at the time) is imagined as having been just a little bit stronger in arm and clearer in mind than it was in our world. It's by no means an unrealistic speculation. Even in our world, it was a close-run thing. So close, in fact, that stamping out every last smouldering ember of world revolution took tens of years and tens of millions of lives. But its suppression is now, at last, complete.
E. H. Carr, in an article or interview for New Left Review, remarked that all of Marx's predictions had come true, except for the proletarian revolution. Cunliffe's view is gloomier: he thinks that they all came true, including the revolution. It really happened, in 1917-1923, and the revolutionaries bungled it.
When most readers of the Communist Manifesto encounter the passage about how throughout history classes have waged 'an uninterrupted, now hidden, now open fight, a fight that each time ended, either in the revolutionary reconstitution of society at large, or in the common ruin of the contending classes' the example that springs to mind is the Fall of the Roman Empire to the barbarians. What Marx and Engels were really alluding to, Cunliffe argues, was subtly different: the Fall of the Republic, rather than the Fall of the Empire. It was the class struggles of patrician and plebeian in the Roman Republic that ended in mutual ruination, and stymied any chance of further progress centuries before the Empire fell.
If readers of the Manifesto are socialists, the common ruin they envisage for bourgeoisie and proletariat is a nuclear war or environmental catastrophe. No such luck, Cunliffe tells us: the common ruin has already happened. The class struggle between bourgeoisie and proletariat is over. The good guys lost. Get over it.
But the non-socialist reader can take no comfort. The suppression of communism, Cunliffe claims, undermined capitalism, sapping its economic dynamism and political stability. With no competing model - however unattractive in many respects - to keep it on its toes, capitalism becomes a couch potato. With no union militancy and shop-floor organisation to contend with, capitalists have less incentive to innovate and rationalise. With no need to integrate the working class in the affairs of state, mass political participation and engagement have been texted their redundancy notices.
The result, however, is that the elites and the rest of the population are more mutually alienated than they ever were in the class struggle. To the political class and state authorities, the ideas and attitudes of the underlying multitude are as a dark continent, viewed with alarm and suspicion, alternately patronised and deplored. Unmoored from the clash of material interests, politics drifts into a Sargasso Sea of slowly, pointlessly, endlessly swirling debris. Debate degenerates into a grandstanding narcissism of small differences around an elite consensus dedicated solely to keeping the show on the road. Political apathy and populist eruptions are its morbid symptoms. The ruin was mutual, and the ruins are where we must henceforth live.
This exhausted order could in principle totter along indefinitely, were it not for the instabilities, internal and external, that result. The political and moral authority of the state quietly unravels, even as its hard power and reach expand. As Britain's riots of 2011 starkly exposed, social order itself can dissipate overnight. And the quest for moral authority at home is translated all too easily into rash adventuring abroad, in the name of democratic and liberal values. To explain, say, the Iraq war as motivated by strategic or economic concerns, a 'war for oil', as leftists are wont to do, is misconceived. There's no underlying interest to expose: the war's liberal-democratic rationalisation really is what it's all about. As Tony Blair said: 'It's worse than you think. I believe in it.'
Readers of my own novels, particularly the Fall Revolution books and some of the more recent ones such as Intrusion and Descent, may find some of the themes outlined above familiar. In the early 1990s, when I started writing my first novel, I was convinced that the Left had suffered a whopping, world-historic defeat with the fall of the Soviet bloc, regardless of how critical or even hostile to it they had been. However, I did expect that this defeat would in time be overcome.
Whatever else it does, Lenin Lives! answers a question that has baffled better minds than mine: how on earth did a splinter of the far left mutate into a cadre of contrarian libertarian Brexiters? Two lines of explanation are often explored. The first is that they remain revolutionary communists under deep cover, engaged in some nefarious long-term scheme. The second is that they have been themselves subverted, suborned by the corporations from which they receive funding. I could go into the various reasons why both are wide of the mark, but I've already gone on long enough. By now you can figure it out for yourself:
It's worse than you think. They really believe in it.
I express my network in a FOAF file, and that is the start of the revolution. —Tim Berners-Lee (2007)
The FOAF standard, or Friend of a Friend standard, is a now largely defunct/ignored/superseded1 web standard dating from the early 2000s that hints at what social networking might have looked like had Facebook not conquered the world. Before we talk about FOAF though, I want to talk about the New York City Subway.
The New York City Subway is controlled by a single entity, the Metropolitan Transportation Agency, better known as the MTA. The MTA has a monopoly on subway travel in New York City. There is no legal way to travel in New York City by subway without purchasing a ticket from the MTA. The MTA has no competitors, at least not in the "subway space."
This wasn't always true. Surprisingly, the subway system was once run by two corporations that competed with each other. The Interborough Rapid Transit Company (IRT) operated lines that ran mostly through Manhattan, while the Brooklyn-Manhattan Transit Corporation (BMT) operated lines in Brooklyn, some of which also extended into Manhattan. In 1932, the City opened its own service called the Independent Subway System to compete with the IRT and BMT, and so for a while there were three different organizations running subway lines in New York City.
One imagines that this was not an effective way to run a subway. It was not. Constructing interchanges between the various systems was challenging because the IRT and BMT used trains of different widths. Interchange stations also had to have at least two different fare-collection areas since passengers switching trains would have to pay multiple operators. The City eventually took over the IRT and BMT in 1940, bringing the whole system together under one operator, but some of the inefficiencies that the original division entailed are still problems today: Trains designed to run along lines inherited from the BMT (e.g. the A, C, or E) cannot run along lines inherited from the IRT (e.g. the 1, 2, or 3) because the IRT tunnels are too narrow. As a result, the MTA has to maintain two different fleets of mutually incompatible subway cars, presumably at significant additional expense relative to other subway systems in the world that only have to deal with a single tunnel width.
This legacy of the competition between the IRT and BMT suggests that subway systems naturally tend toward monopoly. It just makes more sense for there to be a single operator than for there to be competing operators. Average passengers are amply compensated for the loss of choice by never having to worry about whether they brought their IRT MetroCard today but forgot their BMT MetroCard at home.
Okay, so what does the Subway have to do with social networking? Well, I have wondered for a while now whether Facebook has, like the MTA, a natural monopoly. Facebook does seem to have a monopoly, whether natural or unnatural—not over social media per se (I spend much more time on Twitter), but over my internet social connections with real people I know. It has a monopoly over, as they call it, my digitized "social graph"; I would quit Facebook tomorrow if I didn't worry that by doing so I might lose many of those connections. I get angry about this power that Facebook has over me. I get angry in a way that I do not get angry about the MTA, even though the Subway is, metaphorically and literally, a sprawling trash fire. And I suppose I get angry because at root I believe that Facebook's monopoly, unlike the MTA's, is not a natural one.
What this must mean is that I think Facebook owns all of our social data now because they happened to get there first and then dig a big moat around themselves, not because a world with competing Facebook-like platforms is inefficient or impossible. Is that true, though? There are some good reasons to think it isn't: Did Facebook simply get there first, or did they instead just do social networking better than everyone else? Isn't the fact that there is only one Facebook actually convenient if you are trying to figure out how to contact an old friend? In a world of competing Facebooks, what would it mean if you and your boyfriend are now Facebook official, but he still hasn't gotten around to updating his relationship status on VisageBook, which still says he is in a relationship with his college ex? Which site will people trust? Also, if there were multiple sites, wouldn't everyone spend a lot more time filling out web forms?
In the last few years, as the disadvantages of centralized social networks have dramatically made themselves apparent, many people have attempted to create decentralized alternatives. These alternatives are based on open standards that could potentially support an ecosystem of inter-operating social networks (see e.g. the Fediverse). But none of these alternatives has yet supplanted a dominant social network. One obvious explanation for why this hasn't happened is the power of network effects: With everyone already on Facebook, any one person thinking of leaving faces a high cost for doing so. Some might say this proves that social networks are natural monopolies and stop there; I would say that Facebook, Twitter, et al. chose to be walled gardens, and given that people have envisioned and even built social networks that inter-operate, the network effects that closed platforms enjoy tell us little about the inherent nature of social networks.
So the real question, in my mind, is: Do platforms like Facebook continue to dominate merely because of their network effects, or is having a single dominant social network more efficient in the same way that having a single operator for a subway system is more efficient?
Which finally brings me back to FOAF. Much of the world seems to have forgotten about the FOAF standard, but FOAF was an attempt to build a decentralized and open social network before anyone had even heard of Facebook. If any decentralized social network ever had a chance of seizing, before Facebook got there, the redoubt that Facebook now occupies, it was FOAF. Given that a large fraction of humanity now has a Facebook account, and given that relatively few people know about FOAF, should we conclude that social networking, like subway travel, really does lend itself to centralization and natural monopoly? Or does the FOAF project demonstrate that decentralized social networking was a feasible alternative that never became popular for other reasons?
The Future from the Early Aughts
The FOAF project, begun in 2000, set out to create a universal standard for describing people and the relationships between them. That might strike you as a wildly ambitious goal today, but aspirations like that were par for the course in the late 1990s and early 2000s. The web (as people still called it then) had just trounced closed systems like America Online and Prodigy. It could only have been natural to assume that further innovation in computing would involve the open, standards-based approach embodied by the web.
Many people believed that the next big thing was for the web to evolve into something called the Semantic Web. I have written about what exactly the Semantic Web was supposed to be and how it was supposed to work before, so I won't go into detail here. But I will sketch the basic vision motivating the people who worked on Semantic Web technologies, because the FOAF standard was an application of that vision to social networking.
There is an essay called "How Google beat Amazon and Ebay to the Semantic Web" that captures the lofty dream of the Semantic Web well. It was written by Paul Ford in 2002. The essay imagines a future (as imminent as 2009) in which Google, by embracing the Semantic Web, has replaced Amazon and eBay as the dominant e-commerce platform. In this future, you can search for something you want to purchase—perhaps a second-hand Martin guitar—by entering buy:martin guitar into Google. Google then shows you all the people near your zipcode selling Martin guitars. Google knows about these people and their guitars because Google can read RDF, a markup language and core Semantic Web technology focused on expressing relationships. Regular people can embed RDF on their web pages to advertise (among many other things) the items they have to sell. Ford predicts that as the number of people searching for and advertising products this way grows, Amazon and eBay will lose their near-monopolies over, respectively, first-hand and second-hand e-commerce. Nobody will want to search a single centralized database for something to buy when they could instead search the whole web. Even Google, Ford writes, will eventually lose its advantage, because in theory anyone could crawl the web reading RDF and offer a search feature similar to Google's. At the very least, if Google wanted to make money from its Semantic Web marketplace by charging a percentage of each transaction, that percentage would probably be forced down over time by competitors offering a more attractive deal.
Ford's imagined future was an application of RDF, or the Resource Description Framework, to e-commerce, but the exciting thing about RDF was that hypothetically it could be used for anything. The RDF standard, along with a constellation of related standards, once widely adopted, was supposed to blow open database-backed software services on the internet the same way HTML had blown open document publishing on the internet.
One arena that RDF and other Semantic Web technologies seemed poised to take over immediately was social networking. The FOAF project, known originally as "RDF Web Ring" before being renamed, was the offshoot of the Semantic Web effort that sought to accomplish this. FOAF was so promising in its infancy that some people thought it would inevitably make all other social networking sites obsolete. A 2004 Guardian article about the project introduced FOAF this way:
In the beginning, way back in 1996, it was SixDegrees. Last year, it was Friendster. Last week, it was Orkut. Next week, it could be Flickr. All these websites, and dozens more, are designed to build networks of friends, and they are currently at the forefront of the trendiest internet development: social networking. But unless they can start to offer more substantial benefits, it is hard to see them all surviving, once the Friend Of A Friend (FOAF) standard becomes a normal part of life on the net.2
The article goes on to complain that the biggest problem with social networking is that there are too many social networking sites. Something is needed that can connect all of the different networks together. FOAF is the solution, and it will revolutionize social networking as a result.
FOAF, according to the article, would tie the different networks together by doing three key things:
- It would establish a machine-readable format for social data that could be read by any social networking site, saving users from having to enter this information over and over again
- It would allow "personal information management programs," i.e. your "Contacts" application, to generate a file in this machine-readable format that you could feed to social networking sites
- It would further allow this machine-readable format to be hosted on personal homepages and read remotely by social networking sites, meaning that you would be able to keep your various profiles up-to-date by just pushing changes to your own homepage
It is hard to believe today, but the problem in 2004, at least for savvy webizens and technology columnists aware of all the latest sites, was not the lack of alternative social networks but instead the proliferation of them. Given that problem—so alien to us now—one can see why it made sense to pursue a single standard that promised to make the proliferation of networks less of a burden.
The FOAF Spec
According to the description currently given on the FOAF project's website, FOAF is "a computer language defining a dictionary of people-related terms that can be used in structured data." Back in 2000, in a document they wrote to explain the project's goals, Dan Brickley and Libby Miller, FOAF's creators, offered a different description that suggests more about the technology's ultimate purpose—they introduced FOAF as a tool that would allow computers to read the personal information you put on your homepage the same way that other humans do.3 FOAF would "help the web do the sorts of things that are currently the proprietary offering of centralised services."4 By defining a standard vocabulary for people and the relationships between them, FOAF would allow you to ask the web questions such as, "Find me today's web recommendations made by people who work for Medical organizations," or "Find me recent publications by people I've co-authored documents with."
Since FOAF is a standardized vocabulary, the most important output of the FOAF project was the FOAF specification. The FOAF specification defines a small collection of RDF classes and RDF properties. (I'm not going to explain RDF here, but again see my post about the Semantic Web if you want to know more.) The RDF classes defined by the FOAF specification represent subjects you might want to describe, such as people (the Person class) and organizations (the Organization class). The RDF properties defined by the FOAF specification represent logical statements you might make about the different subjects. A person could have, for example, a first name (the givenName property), a last name (the familyName property), perhaps even a personality type (the myersBriggs property), and be near another person or location (the based_near property). The idea was that these classes and properties would be sufficient to represent the kinds of things people say about themselves and their friends on their personal homepage.
The FOAF specification gives the following as an example of a well-formed FOAF document. This example uses XML, though an equivalent document could be written using JSON or a number of other formats:
<foaf:Person rdf:about="#danbri" xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:name>Dan Brickley</foaf:name>
  <foaf:homepage rdf:resource="http://danbri.org/" />
  <foaf:openid rdf:resource="http://danbri.org/" />
  <foaf:img rdf:resource="/images/me.jpg" />
</foaf:Person>
This FOAF document describes a person named "Dan Brickley" (one of the specification's authors) that has a homepage at http://danbri.org, something called an "open ID," and a picture available at /images/me.jpg, presumably relative to the base address of Brickley's homepage. The FOAF-specific terms are prefixed by foaf:, indicating that they are part of the FOAF namespace, while the more general RDF terms are prefixed by rdf:.
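Because FOAF documents are just markup, even a program with no RDF machinery can pull facts out of one. Here is a minimal sketch using only Python's standard library XML parser; the document is the spec's example trimmed to two properties, with an xmlns:rdf declaration added so the snippet parses standalone (a real consumer would use a proper RDF parser, which understands the semantics rather than just the syntax):

```python
import xml.etree.ElementTree as ET

FOAF = "http://xmlns.com/foaf/0.1/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

doc = """
<foaf:Person rdf:about="#danbri"
             xmlns:foaf="http://xmlns.com/foaf/0.1/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <foaf:name>Dan Brickley</foaf:name>
  <foaf:homepage rdf:resource="http://danbri.org/" />
</foaf:Person>
"""

person = ET.fromstring(doc)
# Literal values live in element text; resource values live in rdf:resource.
name = person.find(f"{{{FOAF}}}name").text
homepage = person.find(f"{{{FOAF}}}homepage").attrib[f"{{{RDF}}}resource"]
print(name, homepage)  # Dan Brickley http://danbri.org/
```

The namespace URIs do the heavy lifting here: any program, anywhere, that sees an element in the http://xmlns.com/foaf/0.1/ namespace knows it is looking at FOAF vocabulary.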
Just to persuade you that FOAF isn't tied to XML, here is a similar FOAF example from Wikipedia, expressed using a format called JSON-LD5:
{
"@context": {
"name": "http://xmlns.com/foaf/0.1/name",
"homepage": {
"@id": "http://xmlns.com/foaf/0.1/workplaceHomepage",
"@type": "@id"
},
"Person": "http://xmlns.com/foaf/0.1/Person"
},
"@id": "https://me.example.com",
"@type": "Person",
"name": "John Smith",
"homepage": "https://www.example.com/"
}
This FOAF document describes a person named John Smith with a homepage at www.example.com.
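The trick that makes JSON-LD equivalent to the XML form is the @context, which maps friendly short names back to full FOAF IRIs. A drastically simplified sketch of that expansion step, using only the standard library (real JSON-LD processors handle nesting, datatypes, and much more):

```python
import json

doc = json.loads("""
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/workplaceHomepage",
      "@type": "@id"
    },
    "Person": "http://xmlns.com/foaf/0.1/Person"
  },
  "@id": "https://me.example.com",
  "@type": "Person",
  "name": "John Smith",
  "homepage": "https://www.example.com/"
}
""")

def expand(term, context):
    """Map a short term to its full IRI using the @context (simplified)."""
    entry = context.get(term, term)
    return entry["@id"] if isinstance(entry, dict) else entry

context = doc["@context"]
expanded = {expand(k, context): v
            for k, v in doc.items() if not k.startswith("@")}
print(expanded)
# {'http://xmlns.com/foaf/0.1/name': 'John Smith',
#  'http://xmlns.com/foaf/0.1/workplaceHomepage': 'https://www.example.com/'}
```

After expansion, the JSON-LD and XML documents say the same thing in the same vocabulary, which is the whole point: the format is interchangeable, the IRIs are not.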
Perhaps the best way to get a feel for how FOAF works is to play around with FOAF-a-matic, a web tool for generating FOAF documents. It allows you to enter information about yourself using a web form, then uses that information to create the FOAF document (in XML) that represents you. FOAF-a-matic demonstrates how FOAF could have been used to save everyone from having to enter their social information into a web form ever again—if every social networking site could read FOAF, all you'd need to do to sign up for a new site is point the site to the FOAF document that FOAF-a-matic generated for you.
Here is a slightly more complicated FOAF example, representing me, that I created using FOAF-a-matic:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:admin="http://webns.net/mvcb/">
<foaf:PersonalProfileDocument rdf:about="">
<foaf:maker rdf:resource="#me"/>
<foaf:primaryTopic rdf:resource="#me"/>
<admin:generatorAgent rdf:resource="http://www.ldodds.com/foaf/foaf-a-matic"/>
<admin:errorReportsTo rdf:resource="mailto:leigh@ldodds.com"/>
</foaf:PersonalProfileDocument>
<foaf:Person rdf:ID="me">
<foaf:name>Sinclair Target</foaf:name>
<foaf:givenname>Sinclair</foaf:givenname>
<foaf:family_name>Target</foaf:family_name>
<foaf:mbox rdf:resource="mailto:sinclairtarget@example.com"/>
<foaf:homepage rdf:resource="sinclairtarget.com"/>
<foaf:knows>
<foaf:Person>
<foaf:name>John Smith</foaf:name>
<foaf:mbox rdf:resource="mailto:johnsmith@example.com"/>
<rdfs:seeAlso rdf:resource="www.example.com/foaf.rdf"/>
</foaf:Person>
</foaf:knows>
</foaf:Person>
</rdf:RDF>
This example has quite a lot of preamble setting up the various XML namespaces used by the document. There is also a section containing data about the tool that was used to generate the document, largely so that, it seems, people know whom to email with complaints. The foaf:Person element describing me tells you my name, email address, and homepage. There is also a nested foaf:knows element telling you that I am friends with John Smith.
This example illustrates another important feature of FOAF documents: They can link to each other. If you remember from the previous example, my friend John Smith has a homepage at www.example.com. In this example, where I list John Smith as a foaf:Person with whom I have a foaf:knows relationship, I also provide an rdfs:seeAlso element that points to John Smith's FOAF document hosted on his homepage. Because I have provided this link, any program reading my FOAF document could find out more about John Smith by following the link and reading his FOAF document. In the FOAF document we have for John Smith above, John did not provide any information about his friends (including me, meaning, tragically, that our friendship is unidirectional). But if he had, then the program reading my document could find out not only about me but also about John, his friends, their friends, and so on, until the program has crawled the whole social graph that John and I inhabit.
This functionality will seem familiar to anyone that has used Facebook, which is to say that this functionality will seem familiar to you. There is no foaf:wall property or foaf:poke property to replicate Facebook's feature set exactly. Obviously, there is also no slick blue user interface that everyone can use to visualize their FOAF social network; FOAF is just a vocabulary. But Facebook's core feature—the feature that I have argued is key to Facebook's monopoly power over, at the very least, myself—is here provided in a distributed way. FOAF allows a group of friends to represent their real-life social graph digitally by hosting FOAF documents on their own homepages. It allows them to do this without surrendering control of their data to a centralized database in the sky run by a billionaire android-man who spends much of his time apologizing before congressional committees.
FOAF on Ice
If you visit the current FOAF project homepage, you will notice that, in the top right corner, there is an image of the character Fry from the TV series Futurama, stuck inside some sort of stasis chamber. This is a still from the pilot episode of Futurama, in which Fry gets frozen in a cryogenic tank in 1999 only to awake a millennium later in 2999. Brickley, whom I messaged briefly on Twitter, told me that he put that image there as a way of communicating that the FOAF project is currently "in stasis," though he hopes that there will be a future opportunity to resuscitate the project along with its early 2000s optimism about how the web should work.
FOAF never revolutionized social networking the way that the 2004 Guardian article about it expected it would. Some social networking sites decided to support the standard: LiveJournal and MyOpera are examples.6 FOAF even played a role in Howard Dean's presidential campaign in 2004—a group of bloggers and programmers got together to create a network of websites they called "DeanSpace" to promote the campaign, and these sites used FOAF to keep track of supporters and volunteers.7 But today FOAF is known primarily for being one of the more widely used vocabularies of RDF, itself a niche standard on the modern web. If FOAF is part of your experience of the web today at all, then it is as an ancestor to the technology that powers Google's "knowledge panels" (the little sidebars that tell you the basics about a person or a thing if you searched for something simple). Google uses vocabularies published by the schema.org project—the modern heir to the Semantic Web effort—to populate its knowledge panels.8 The schema.org vocabulary for describing people seems to be somewhat inspired by FOAF and serves many of the same purposes.
So why didn't FOAF succeed? Why do we all use Facebook now instead? Let's ignore that FOAF is a simple standard with nowhere near as many features as Facebook—that's true today, clearly, but if FOAF had enjoyed more momentum it's possible that applications could have been built on top of it to deliver a Facebook-like experience. The interesting question is: Why didn't this nascent form of distributed social networking catch fire when Facebook was not yet around to compete with it?
There probably is no single answer to that question, but if I had to pick one, I think the biggest issue is that FOAF only makes sense on a web where everyone has a personal website. In the late 1990s and early 2000s, it might have been easy to assume the web would eventually look like this, especially since so many of the web's early adopters were, as far as I can tell, prolific bloggers or politically engaged technologists excited to have a platform. But the reality is that regular people don't want to have to learn how to host a website. FOAF allows you to control your own social information and broadcast it to social networks instead of filling out endless web forms, which sounds pretty great if you already have somewhere to host that information. But most people in practice found it easier to just fill out the web form and sign up for Facebook than to figure out how to buy a domain and host some XML.
What does this mean for my original question about whether or not Facebook's monopoly is a natural one? I think I have to concede that the FOAF example is evidence that social networking does naturally lend itself to monopoly.
That people did not want to host their own data isn't especially meaningful itself—modern distributed social networks like Mastodon have solved that problem by letting regular users host their profiles on nodes set up by more savvy users. It is a sign, however, of just how much people hate complexity. This is bad news for decentralized social networks, because they are inherently more complex under the hood than centralized networks in a way that is often impossible to hide from users.
Consider FOAF: If I were to write an application that read FOAF data from personal websites, what would I do if Sally's FOAF document mentions a John Smith with a homepage at example.com, and Sue's FOAF document mentions a John Smith with a homepage at example.net? Are we talking about a single John Smith with two websites or two entirely different John Smiths? What if both FOAF documents list John Smith's email as johnsmith@gmail.com? This issue of identity was an acute one for FOAF. In a 2003 email, Brickley wrote that because there does not exist and probably should not exist a "planet-wide system for identifying people," the approach taken by FOAF is "pluralistic."9 Some properties of FOAF people, such as email addresses and homepage addresses, are special in that their values are globally unique. So these different properties can be used to merge (or, as Libby Miller called it, "smoosh") FOAF documents about people together. But none of these special properties are privileged above the others, so it's not obvious how to handle our John Smith case. Do we trust the homepages and conclude we have two different people? Or do we trust the email addresses and conclude we have a single person? Could I really write an application capable of resolving this conflict without involving (and inconveniencing) the user?
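The dilemma is easy to make concrete. Below is a toy sketch of "smooshing" (the record structure and merge rule are my own simplification, not anything from the FOAF spec): merge two records whenever they share a value for one of the chosen globally-unique properties, and watch the answer change depending on which property you trust.

```python
# Each record is (source, {property: value}). In this sketch, mbox and
# homepage stand in for FOAF's globally unique "smooshable" properties.
records = [
    ("sally", {"name": "John Smith", "homepage": "example.com",
               "mbox": "johnsmith@gmail.com"}),
    ("sue",   {"name": "John Smith", "homepage": "example.net",
               "mbox": "johnsmith@gmail.com"}),
]

def smoosh(records, keys):
    """Group records that share a value for any of the given unique keys."""
    groups = []  # each group is a list of records believed to be one person
    for rec in records:
        _, props = rec
        for group in groups:
            if any(props.get(k) == other.get(k)
                   for k in keys for _, other in group):
                group.append(rec)
                break
        else:
            groups.append([rec])
    return groups

print(len(smoosh(records, ["mbox"])))      # 1 — trust emails: one person
print(len(smoosh(records, ["homepage"])))  # 2 — trust homepages: two people
```

Same data, two defensible merge rules, two different answers — and nothing in the documents themselves tells the application which rule is right.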
Facebook, with its single database and lack of political qualms, could create a "planet-wide system for identifying people" and so just gave every person a unique Facebook ID. Problem solved.
Complexity alone might not doom distributed social networks if people cared about being able to own and control their data. But FOAF's failure to take off demonstrates that people have never valued control very highly. As one blogger has put it, "'Users want to own their own data' is an ideology, not a use case."10 If users do not value control enough to stomach additional complexity, and if centralized systems are simpler than distributed ones—and if, further, centralized systems tend to be closed and thus the successful ones enjoy powerful network effects—then social networks are indeed natural monopolies.
That said, I think there is still a distinction to be drawn between the subway system case and the social networking case. I am comfortable with the MTA's monopoly on subway travel because I expect subway systems to be natural monopolies for a long time to come. If there is going to be only one operator of the New York City Subway, then it ought to be the government, which is at least nominally more accountable than a private company with no competitors. But I do not expect social networks to stay natural monopolies. The Subway is carved in granite; the digital world is writ in water. Distributed social networks may now be more complicated than centralized networks in the same way that carrying two MetroCards is more complicated than carrying one. In the future, though, the web, or even the internet, could change in fundamental ways that make distributed technology much easier to use.
If that happens, perhaps FOAF will be remembered as the first attempt to build the kind of social network that humanity, after a brief experiment with corporate mega-databases, does and always will prefer.
If you enjoyed this post, more posts like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.
Previously on TwoBitHistory…
I know it's been too long since my last post, but my new one is here! I wrote almost 5000 words on John Carmack, Doom, and the history of the binary space partitioning tree. https://t.co/SVunDZ0hZ1
— TwoBitHistory (@TwoBitHistory) November 6, 2019
1. Please note that I did not dare say "dead." ↩
2. Jack Schofield, "Let's be Friendsters," The Guardian, February 19, 2004, accessed January 5, 2020, https://www.theguardian.com/technology/2004/feb/19/newmedia.media. ↩
3. Dan Brickley and Libby Miller, "Introducing FOAF," FOAF Project, 2008, accessed January 5, 2020, https://web.archive.org/web/20140331104046/http://www.foaf-project.org/original-intro. ↩
4. Ibid. ↩
5. Wikipedia contributors, "JSON-LD," Wikipedia: The Free Encyclopedia, December 13, 2019, accessed January 5, 2020, https://en.wikipedia.org/wiki/JSON-LD. ↩
6. "Data Sources," FOAF Project Wiki, December 11, 2009, accessed January 5, 2020, https://web.archive.org/web/20100226072731/http://wiki.foaf-project.org/w/DataSources. ↩
7. Aldon Hynes, "What is Dean Space?", Extreme Democracy, accessed January 5, 2020, http://www.extremedemocracy.com/chapters/Chapter18-Hynes.pdf. ↩
8. "Understand how structured data works," Google Developer Portal, accessed January 5, 2020, https://developers.google.com/search/docs/guides/intro-structured-data. ↩
9. Dan Brickley, "Identifying things in FOAF," rdfweb-dev Mailing List, July 10, 2003, accessed January 5, 2020, http://lists.foaf-project.org/pipermail/foaf-dev/2003-July/005463.html. ↩
10. tef, "Why your distributed network will not work," Programming is Terrible, January 2, 2013, https://programmingisterrible.com/post/39438834308/distributed-social-network. ↩
It’s almost the end of the year, so here are some predictions for 2020 and the 2020s.
Let’s get the easy ones out of the way:
- Climate change will continue to accelerate.
- Greenhouse gas emissions will continue to increase, and the rate of increase will be positive, possibly more positive than in 2019.
- We’ll see extreme weather events, including extreme snowfall, extreme winds, massive cyclones, record-breaking drought, and extreme wildfire. Somewhere. I don’t know which will occur when or where, though.
- Business will continue to look for ways to profit off climate change, instead of stopping it.
- People will continue protesting about climate issues, but since the movement leaders seem to be still reinventing the wheel and using ideas, tactics, and strategies that The Establishment has good counters to, they won’t get as much done as needs to be done. And that will really and truly suck.
I’ll stick my neck out and make some political predictions:
- A billionaire will win the 2020 US election.
- Trump will ultimately be indicted on counts they didn’t bother to impeach him over (either in 2021 or 2025).
- Mitch McConnell will win his re-election.
- Billionaires will continue to gain control. Also, billionaires will continue to demonstrate that wealth is a good substitute for intelligence and character.
- I will continue to be a contrarian pessimist who delights in giving reality opportunities to demonstrate how wrong my political prognostications are….
On the science and technology front:
- Facebook and Social Media in general will increasingly become “so last decade,” especially as political, social, and hacking issues mount.
- The truism that anything connected to the internet can be hacked will be demonstrated in a new way (this is going to be annoying to check in December 2020).
- AI’s potential and shortcomings will become more evident.
- Skeet- and trap-shooting will become more popular among non-hunters. In possibly unrelated news, people will publish thought pieces about how to deal with the problems drones pose (checking this will be tricky too. Oh well.).
- There will be new battery technologies tested in labs and covered breathlessly by the tech press. I’ll stick my neck out and say that one new battery technology actually demonstrates it can scale up to commercial use.
- Disputes over lithium and sand will continue. Phosphorus shortages will get coverage as a growing problem.
- The fishing industry will get some horrific exposé, either about working conditions, effects on sea life, and/or the looming extinction of favorite food fishes.
- Some tiny advance in nuclear fusion will be trumpeted as heralding the dawn of nuclear fusion as a power source.
- A company focused on finding new antibiotics and bringing them to market will go out of business.
And a few more random predictions:
- Some people (other than me) will call the decade we’re leaving “The Terrible Teens.” Perhaps the decade we’re entering will be called “The Howling Twenties”?
- Habitat gardening will increasingly become a thing (I’m cheating, because the group I’m active with is holding a workshop on this in a few weeks).
- There will be a recession in the housing industry.
- I will be able to post more on this blog.
- I’ll be posting next December about how my predictions turned out.
Anyway, congratulations on successfully escaping 2019. I hope your 2020 is no worse, possibly even better, than 2019 was for you.
What are your predictions?
Just some positive thinking–what’s happening to me?–for the holidays. This is another in my series of alt-histories I wish someone else would write, and it’s kind of in the spirit of DC’s famous Watchmen comic.
The idea is pretty simple: how do you write an alt-history where people take climate change seriously enough to do something about it? I’ve butted heads long enough to know that most people in the Baby Boom and Gen X (at least in Middle and Upper Class America) are not interested in dealing with climate change if it causes them any serious inconvenience. I sympathize with the kids of “generation omega” (they’re now hitting college, and I hope the name “GenO” doesn’t catch on). They’re scared and furious, and they should be. I’m getting sick of how many ways I’ve heard people find to do nothing. Either we’re all going to hell anyway, or there’s no problem, or there’s a problem but nothing we do as individuals matters, or–look, squirrel/nuclear power/don’t eat meat/distract/disrupt/bullshit…. I get this a lot when I talk with people. Trump’s rubbing off on everyone.
So anyway, I’d suggest a different take on cli-fi. It was sort of done in Watchmen, but it could be done better. The idea starts with someone leaking all the research the petroleum companies did on climate change back in the 1950s and 1960s, with the document leak happening in the 1970s if not before. At the height of the environmental movement, Boomers start taking climate change seriously, putting us decades ahead of where we are now.
I don’t think it would necessarily be blue skies and blooming roses, but it might be pleasant for writers and environmentalists to think about what might have been, had we given a shit when we had the oil to really rebuild global infrastructure instead of going on a mad, metastatic building spree as we did in the real world. Heck, if you’re interested in writing this story universe, have Timothy Leary get run over by a hippy’s bus, so that LSD becomes legal too, while you’re at it.
As with any of my crazy ideas, feel free to take it and run with it. I’m working on something entirely different.
Back on December 27, 2018, I posted a set of predictions. I haven’t posted much since then, because I’ve been annoyingly busy with conservation work, fighting a bunch of leapfrog sprawl developments in San Diego County. Most of that I can’t really talk about due to litigation issues, but I can at least go over the predictions I made a year ago and score how well I did.
Here they are.
- That I'll write a column in December 2019 about what I got right and wrong. This means, among other things, that I won't die in a pandemic or nuclear war in 2019, and that civilization won't collapse.
Got that right. Yay! It’s always good to start off a set of predictions with a win.
- The US president as of December 2019 will be either Pence or Trump, most likely Trump. This isn't because I'm a Trump supporter, but for two reasons. One is that the US Senate is Republican, so they're not going to vote to impeach him. Also, the US looks like it can weather having incompetence-in-chief, so long as we don't get into a war, and since he's smart enough not to start a nuclear war, I don't think the US is going to get invaded. Rather, I think it's to a lot of peoples' advantages to dump liquid oxygen on the bonfire of his vanity, to make him unable to wage the struggle for reelection for 2020, even while he's roped into it as the only Republican candidate running.
To no one’s surprise, Trump’s still president.
- Hard, no-deal Brexit won't happen.
Oh how I wish I’d gotten this right, but after Thursday’s election it seems English Nationalism looks to North Korea for a model of cordial international relations. Is this sour grapes over losing the empire? If so, please do look at how France has handled its transition with rather more aplomb (and rather more islands still under its control).
- There will be lots of disasters linked to climate change.
Such as the Midwest US floods, massive fires in Australia, methane bubbling from the east Siberian Sea…And I’ll make the same prediction for 2020.
- Nuclear fusion will be announced to be 30 years away at some point during the year.
Kinda maybe? The Lockheed baby fusion reactor of 2015 is on its fifth design, but still hasn’t gone live. On the bad side, this shows their initial PR was so much BS (sadly, no surprise). On the other hand, they’re actually working on it, which I guess is a good thing. If fusion’s going to play any role in dealing with climate change, we needed it 30 years ago, really.
- Suburban sprawl will largely stop in California and San Diego (this is part of the stuff I can't talk about further).
I was pretty sure, going into 2019, that this was a safe prediction, and I was largely correct. However, it’s not a permanent stop. What’s going on is that there were a bunch of highly problematic leapfrog sprawl developments. Most of them were approved, and those approvals ended up in court as opponents sued. There are three left in the queue, and they’ll all get the same treatment.
But it wasn’t unanimous. A couple of developments (notably Paradise Valley in Riverside County) were ignominiously shot down, after much work by the local environmentalists. Local county supervisors (courageously) stood up against the corruption that’s generally accompanied the sprawl. The problem with these projects is that they’ll gross hundreds of millions to billions of dollars at full build-out, so the cost of financing elections and other shenanigans is a minor part of the budget. Kudos to the Riverside Supervisors for doing the right thing regardless with Paradise Valley. I could only wish that the majority of San Diego Supervisors had similarly developed notochords.
If we’re really lucky, the judges will say some good things about not putting people in harm’s way from fires and earthquakes. Perhaps (perhaps!) their rulings will make it harder to build in dangerous areas. I’m skeptical about this last, because right now the trend on hazard analyses is to lie and hope that your opponents can’t sue (or pay for the appeal) to make you tell the truth. Even if a judge sternly tells developers that they need to not hang people out to fry, they’ll likely do it anyway, while claiming on paper that they are not.
- A bunch of new bench-top battery technologies will be published and then disappear as someone tries to create them on a commercial scale.
Oddly enough, this article just showed up, so yes, and I didn’t even have to keep track of the battery news in 2019. This was a safe prediction.
- Something vital we thought was correct about social media and its inevitability will turn out to be completely untrue in scary ways.
Since I was silly enough to throw “inevitability” in here, I have no idea how to score this one. Definitely the glow is off social media, and I think a lot of us are getting sick of the “lock-in” it’s so far achieved with much of society.
- There will be a lot of politicking around the Green New Deal.
Yes, and I’m not going to link to it. While the words of the actual Green New Deal are quite inspiring, what I’ve seen about how it’s being organized on the ground…how do I say this?…could use some improvement. The other thing I could say is that I started out involved, then I stopped being involved, and I sincerely hope that what ultimately comes out is better than what I saw earlier this year.
- San Diego will start working on the third edition of their Climate Action Plan, as the first two have been thrown out by judges.
I was wrong. The County has so far lost five times on this in court, but they decided to appeal again rather than just getting on with writing a new plan. There’s a Forrest Gump saying that adequately covers their behavior.
- The 2019 rainy season will be drier than the 2018 season. And I'll wash my car a lot in the next few months too.
This one I can’t score. We’ve had more rain than we had at this time last year, which was extremely fortunate, as it ended the fire season before it could properly get going. The critical question is whether those few inches are all we’re going to get this rain year, or whether there’s more coming. It’s impossible to tell, as rain in southern California almost always comes in a few storms, and we get rain or not depending on whether the storms hit us or miss us.
Given that I’m a vocal pessimist, it’s kind of embarrassing that the stuff I got wrong, like the English doing Brexit and the behavior of the San Diego supervisors, was because I was too optimistic about their behavior. Guess I’ll have to work on that.
Happy holidays, all.
All promotional literature was designed and printed by the Scarfolk Advertising Agency, who, it was later revealed to the surprise of all clients concerned, had been working not only for the Conservative Party, but also for the Labour and Liberal Parties.
Furthermore, the agency cleverly maximised its profits by selling exactly the same poster designs to all clients. Only the party name was changed. This made it difficult for voters to decide who to vote for, but it also confused politicians who became unsure which party they belonged to.



*See also: 'Trampvertising'.
Further reading: 'Watch Out! There's a Politician About' (1975), 'Voting isn't Working' election poster, 'Democracy Rationing', 'Put Old People Down at Birth' election pamphlet.
Parents and teachers assumed that the booklet was based on psychological research but it had no scientific basis whatsoever. The booklet's medically untrained author was one of the dinner ladies from the council canteen before she was fired for attempting to slip strychnine into bowls of blancmange.
Despite the scandal, the booklet remained on the school curriculum for many years and the author was invited by the council to pen an updated edition from her prison cell in 1979.


Apocalyptic toys were all the rage in the late 1970s, not that they were thought of as apocalyptic at the time. Citizens didn't fear their annihilation; they quite looked forward to demonstrating their 'Dunkirk spirit' with the misguided belief that it would somehow bring the country together. It didn't occur to them that their dogmatic nationalism might instead bring about the demise of the nation.
As the country moved toward collapse, social unrest and inevitable casualties increased. The paranoid state began anonymously exterminating citizens who so much as hinted at insurrection. Average (and the vast numbers of below-average) people were killed in street clashes between opposing factions and there were spates of frightened suicides.
Scar Toys exploited this expanding market opportunity and created a range of toys aimed at the many children in the process of being orphaned. One such toy, the Breath Mirror Set, aimed at young girls, was designed to accompany their more traditional beauty/vanity toys. The deluxe set (see picture above) included one mirror for each parent, colour-coded as per gender convention: pink for mothers, blue for fathers.
The wording on the back of the packaging encouraged children to use the mirrors beyond the death of their own parents. Included was a booklet into which little pink stars could be affixed for every corpse that was identified using the mirrors. Highly sought-after prizes were awarded to the girls with the most stars and council archival documents reveal that the police turned a blind eye when gangs of little girls began slaughtering adults in frenzied attempts to accumulate more stars.
In 1993, id Software released the first-person shooter Doom, which quickly became a phenomenon. The game is now considered one of the most influential games of all time.
A decade after Doom's release, in 2003, journalist David Kushner published a book about id Software called Masters of Doom, which has since become the canonical account of Doom's creation. I read Masters of Doom a few years ago and don't remember much of it now, but there was one story in the book about lead programmer John Carmack that has stuck with me. This is a loose gloss of the story (see below for the full details), but essentially, early in the development of Doom, Carmack realized that the 3D renderer he had written for the game slowed to a crawl when trying to render certain levels. This was unacceptable, because Doom was supposed to be action-packed and frenetic. So Carmack, realizing the problem with his renderer was fundamental enough that he would need to find a better rendering algorithm, started reading research papers. He eventually implemented a technique called "binary space partitioning," never before used in a video game, that dramatically sped up the Doom engine.
That story about Carmack applying cutting-edge academic research to video games has always impressed me. It is my explanation for why Carmack has become such a legendary figure. He deserves to be known as the archetypal genius video game programmer for all sorts of reasons, but this episode with the academic papers and the binary space partitioning is the justification I think of first.
Obviously, the story is impressive because "binary space partitioning" sounds like it would be a difficult thing to just read about and implement yourself. I've long assumed that what Carmack did was a clever intellectual leap, but because I've never understood what binary space partitioning is or how novel a technique it was when Carmack decided to use it, I've never known for sure. On a spectrum from Homer Simpson to Albert Einstein, how much of a genius-level move was it really for Carmack to add binary space partitioning to Doom?
I've also wondered where binary space partitioning first came from and how the idea found its way to Carmack. So this post is about John Carmack and Doom, but it is also about the history of a data structure: the binary space partitioning tree (or BSP tree). It turns out that the BSP tree, rather interestingly, and like so many things in computer science, has its origins in research conducted for the military.
That's right: E1M1, the first level of Doom, was brought to you by the US Air Force.
The VSD Problem

The BSP tree is a solution to one of the thorniest problems in computer graphics. In order to render a three-dimensional scene, a renderer has to figure out, given a particular viewpoint, what can be seen and what cannot be seen. This is not especially challenging if you have lots of time, but a respectable real-time game engine needs to figure out what can be seen and what cannot be seen at least 30 times a second.
This problem is sometimes called the problem of visible surface determination. Michael Abrash, a programmer who worked with Carmack on Quake (id Software's follow-up to Doom), wrote about the VSD problem in his famous Graphics Programming Black Book:
I want to talk about what is, in my opinion, the toughest 3-D problem of all: visible surface determination (drawing the proper surface at each pixel), and its close relative, culling (discarding non-visible polygons as quickly as possible, a way of accelerating visible surface determination). In the interests of brevity, I'll use the abbreviation VSD to mean both visible surface determination and culling from now on.
Why do I think VSD is the toughest 3-D challenge? Although rasterization issues such as texture mapping are fascinating and important, they are tasks of relatively finite scope, and are being moved into hardware as 3-D accelerators appear; also, they only scale with increases in screen resolution, which are relatively modest.
In contrast, VSD is an open-ended problem, and there are dozens of approaches currently in use. Even more significantly, the performance of VSD, done in an unsophisticated fashion, scales directly with scene complexity, which tends to increase as a square or cube function, so this very rapidly becomes the limiting factor in rendering realistic worlds.1
Abrash was writing about the difficulty of the VSD problem in the late '90s, years after Doom had proved that regular people wanted to be able to play graphically intensive games on their home computers. In the early '90s, when id Software first began publishing games, the games had to be programmed to run efficiently on computers not designed to run them, computers meant for word processing, spreadsheet applications, and little else. To make this work, especially for the few 3D games that id Software published before Doom, id Software had to be creative. In these games, the design of all the levels was constrained in such a way that the VSD problem was easier to solve.
For example, in Wolfenstein 3D, the game id Software released just prior to Doom, every level is made from walls that are axis-aligned. In other words, in the Wolfenstein universe, you can have north-south walls or west-east walls, but nothing else. Walls can also only be placed at fixed intervals on a grid—all hallways are either one grid square wide, or two grid squares wide, etc., but never 2.5 grid squares wide. Though this meant that the id Software team could only design levels that all looked somewhat the same, it made Carmack's job of writing a renderer for Wolfenstein much simpler.
The Wolfenstein renderer solved the VSD problem by "marching" rays into the virtual world from the screen. Usually a renderer that uses rays is a "raycasting" renderer—these renderers are often slow, because solving the VSD problem in a raycaster involves finding the first intersection between a ray and something in your world, which in the general case requires lots of number crunching. But in Wolfenstein, because all the walls are aligned with the grid, the only location a ray can possibly intersect a wall is at the grid lines. So all the renderer needs to do is check each of those intersection points. If the renderer starts by checking the intersection point nearest to the player's viewpoint, then checks the next nearest, and so on, and stops when it encounters the first wall, the VSD problem has been solved in an almost trivial way. A ray is just marched forward from each pixel until it hits something, which works because the marching is so cheap in terms of CPU cycles. And actually, since all walls are the same height, it is only necessary to march a single ray for every column of pixels.
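The marching idea can be sketched in a few lines of Python. This is a toy illustration under my own assumptions, not id Software's actual code: the map, the `march` function, and the fixed step size are all made up, and a real Wolfenstein-style renderer would hop exactly from grid line to grid line rather than taking small fixed steps.

```python
import math

# Hypothetical axis-aligned grid map: 1 is a wall cell, 0 is empty.
MAP = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def march(px, py, angle, step=0.01, max_dist=20.0):
    """Walk a ray forward from (px, py) until it enters a wall cell."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == 1:   # entered a wall cell
            return dist
        dist += step                   # cheap march, no general intersection math
    return max_dist

# One ray per screen column; on-screen wall height ~ 1 / distance.
d = march(1.5, 1.5, 0.0)   # looking east; the wall at x = 4 is ~2.5 units away
```

Because every wall sits on the grid, the only work per step is an array lookup, which is why this scheme was cheap enough for early-'90s PCs.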
This rendering shortcut made Wolfenstein fast enough to run on underpowered home PCs in the era before dedicated graphics cards. But this approach would not work for Doom, since the id team had decided that their new game would feature novel things like diagonal walls, stairs, and ceilings of different heights. Ray marching was no longer viable, so Carmack wrote a different kind of renderer. Whereas the Wolfenstein renderer, with its ray for every column of pixels, is an "image-first" renderer, the Doom renderer is an "object-first" renderer. This means that rather than iterating through the pixels on screen and figuring out what color they should be, the Doom renderer iterates through the objects in a scene and projects each onto the screen in turn.
In an object-first renderer, one easy way to solve the VSD problem is to use a z-buffer. Each time you project an object onto the screen, for each pixel you want to draw to, you do a check. If the part of the object you want to draw is closer to the player than what was already drawn to the pixel, then you can overwrite what is there. Otherwise you have to leave the pixel as is. This approach is simple, but a z-buffer requires a lot of memory, and the renderer may still expend a lot of CPU cycles projecting level geometry that is never going to be seen by the player.
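The z-buffer check described above can be sketched in a few lines. This is a generic toy, not Doom's renderer; the buffer sizes and the `draw_fragment` helper are my own invention.

```python
# Every pixel remembers the depth of whatever was last drawn there;
# a new fragment only lands if it is closer than what came before.
W, H = 4, 3
framebuffer = [[None] * W for _ in range(H)]
zbuffer = [[float("inf")] * W for _ in range(H)]

def draw_fragment(x, y, depth, color):
    """Write a pixel only if it is nearer than the existing contents."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

draw_fragment(1, 1, depth=5.0, color="far wall")
draw_fragment(1, 1, depth=2.0, color="near pillar")   # closer: overwrites
draw_fragment(1, 1, depth=9.0, color="distant sky")   # farther: rejected
```

Note that the far wall was drawn and then overwritten, which is exactly the wasted work (and, on VGA hardware, the expensive frame-buffer writes) that made the z-buffer unattractive at the time.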
In the early 1990s, there was an additional drawback to the z-buffer approach: On IBM-compatible PCs, which used a video adapter system called VGA, writing to the output frame buffer was an expensive operation. So time spent drawing pixels that would only get overwritten later tanked the performance of your renderer.
Since writing to the frame buffer was so expensive, the ideal renderer was one that started by drawing the objects closest to the player, then the objects just beyond those objects, and so on, until every pixel on screen had been written to. At that point the renderer would know to stop, saving all the time it might have spent considering far-away objects that the player cannot see. But ordering the objects in a scene this way, from closest to farthest, is tantamount to solving the VSD problem. Once again, the question is: What can be seen by the player?
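The front-to-back idea can be illustrated with a toy of my own construction (the span format and screen size are made up): draw the nearest wall spans first, track which screen columns are already filled, and stop the moment the screen is full.

```python
# (distance, first_column, last_column) spans, to be drawn near to far.
spans = [
    (1.0, 0, 3),
    (2.0, 2, 7),
    (9.0, 0, 7),   # never considered if the screen fills up first
]
SCREEN = 8
filled = [False] * SCREEN
drawn = []

for dist, lo, hi in sorted(spans):          # nearest first
    new_cols = [c for c in range(lo, hi + 1) if not filled[c]]
    if new_cols:                            # something still visible here
        drawn.append(dist)
        for c in new_cols:
            filled[c] = True
    if all(filled):                         # screen full: stop early
        break

# drawn == [1.0, 2.0]; the far span at distance 9.0 was never touched
```

The catch, as the text says, is that producing that near-to-far ordering in the first place is the hard part.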
Initially, Carmack tried to solve this problem by relying on the layout of Doom's levels. His renderer started by drawing the walls of the room currently occupied by the player, then flooded out into neighboring rooms to draw the walls in those rooms that could be seen from the current room. Provided that every room was convex, this solved the VSD issue. Rooms that were not convex could be split into convex "sectors." You can see how this rendering technique might have looked if run at extra-slow speed in this video, where YouTuber Bisqwit demonstrates a renderer of his own that works according to the same general algorithm. This algorithm was successfully used in Duke Nukem 3D, released three years after Doom, when CPUs were more powerful. But, in 1993, running on the hardware then available, the Doom renderer that used this algorithm struggled with complicated levels—particularly when sectors were nested inside of each other, which was the only way to create something like a circular pit of stairs. A circular pit of stairs led to lots of repeated recursive descents into a sector that had already been drawn, strangling the game engine's speed.
Around the time that the id team realized that the Doom game engine might be too slow, id Software was asked to port Wolfenstein 3D to the Super Nintendo. The Super Nintendo was even less powerful than the IBM-compatible PCs of the day, and it turned out that the ray-marching Wolfenstein renderer, simple as it was, didn't run fast enough on the Super Nintendo hardware. So Carmack began looking for a better algorithm. It was actually for the Super Nintendo port of Wolfenstein that Carmack first researched and implemented binary space partitioning. In Wolfenstein, this was relatively straightforward because all the walls were axis-aligned; in Doom, it would be more complex. But Carmack realized that BSP trees would solve Doom's speed problems too.
Binary Space Partitioning

Binary space partitioning makes the VSD problem easier to solve by splitting a 3D scene into parts ahead of time. For now, you just need to grasp why splitting a scene is useful: If you draw a line (really a plane in 3D) across your scene, and you know which side of the line the player or camera viewpoint is on, then you also know that nothing on the other side of the line can obstruct something on the viewpoint's side of the line. If you repeat this process many times, you end up with a 3D scene split into many sections, which wouldn't be an improvement on the original scene except now you know more about how different parts of the scene can obstruct each other.
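The principle can be sketched in 2D with a single splitting line (a toy of my own devising; the objects, coordinates, and split position are made up): whichever side of the line the camera is on, the other side can be drawn first, because nothing over there can occlude anything on the camera's side.

```python
# Hypothetical 2D scene: objects are (name, x-position) pairs,
# split by a vertical line at x = 5.
objects = [("tree", 2.0), ("rock", 8.0), ("wall", 4.0)]
SPLIT_X = 5.0

def back_to_front(camera_x):
    """Draw order: far side of the split line first, then the near side."""
    near = [o for o in objects if (o[1] < SPLIT_X) == (camera_x < SPLIT_X)]
    far = [o for o in objects if o not in near]
    return far + near

order = back_to_front(camera_x=1.0)
# the far-side object ("rock") is drawn first, so near objects paint over it
```

One split gives only a two-way ordering; repeating the splits recursively, as the next sections describe, is what turns this observation into a complete draw order.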
The first people to write about dividing a 3D scene like this were researchers trying to establish for the US Air Force whether computer graphics were sufficiently advanced to use in flight simulators. They released their findings in a 1969 report called "Study for Applying Computer-Generated Images to Visual Simulation." The report concluded that computer graphics could be used to train pilots, but also warned that the implementation would be complicated by the VSD problem:
One of the most significant problems that must be faced in the real-time computation of images is the priority, or hidden-line, problem. In our everyday visual perception of our surroundings, it is a problem that nature solves with trivial ease; a point of an opaque object obscures all other points that lie along the same line of sight and are more distant. In the computer, the task is formidable. The computations required to resolve priority in the general case grow exponentially with the complexity of the environment, and soon they surpass the computing load associated with finding the perspective images of the objects.2
One solution these researchers mention, which according to them was earlier used in a project for NASA, is based on creating what I am going to call an "occlusion matrix." The researchers point out that a plane dividing a scene in two can be used to resolve "any priority conflict" between objects on opposite sides of the plane. In general you might have to add these planes explicitly to your scene, but with certain kinds of geometry you can just rely on the faces of the objects you already have. They give the example in the figure below, where p1, p2, and p3 are the separating planes. If the camera viewpoint is on the forward or "true" side of one of these planes, then that plane evaluates to 1. The matrix shows the relationships between the three objects based on the three dividing planes and the location of the camera viewpoint: if object ai obscures object aj, then entry aij in the matrix will be a 1.

[Figure: three objects separated by planes p1, p2, and p3, with the corresponding occlusion matrix.]
The researchers propose that this matrix could be implemented in hardware and re-evaluated every frame. Basically the matrix would act as a big switch or a kind of pre-built z-buffer. When drawing a given object, no video would be output for the parts of the object when a 1 exists in the object's column and the corresponding row object is also being drawn.
The major drawback with this matrix approach is that to represent a scene with n objects you need a matrix of size n×n. So the researchers go on to explore whether it would be feasible to represent the occlusion matrix as a "priority list" instead, which would only be of size n and would establish an order in which objects should be drawn. They immediately note that for certain scenes like the one in the figure above no ordering can be made (since there is an occlusion cycle), so they spend a lot of time laying out the mathematical distinction between "proper" and "improper" scenes. Eventually they conclude that, at least for "proper" scenes—and it should be easy enough for a scene designer to avoid "improper" cases—a priority list could be generated. But they leave the list generation as an exercise for the reader. It seems the primary contribution of this 1969 study was to point out that it should be possible to use partitioning planes to order objects in a scene for rendering, at least in theory.
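The report describes this scheme in prose and in terms of hardware, but the core test is easy to sketch in code. Below is a rough 2D Python version of the occlusion-matrix idea as I understand it; the function names, the point-per-object simplification, and the plane representation are all my own, not the report's:

```python
# Toy 2D sketch of the 1969 "occlusion matrix" idea. A separating
# plane (here a line) resolves priority between objects on opposite
# sides: the object on the camera's side may obscure the other,
# never vice versa.

def side(plane, point):
    """Return +1 if point is on the 'forward' side of the line
    (given as (a, b, c) with ax + by + c = 0), else -1."""
    a, b, c = plane
    x, y = point
    return 1 if a * x + b * y + c > 0 else -1

def occlusion_matrix(objects, planes, camera):
    """objects: one representative point per object.
    Entry m[i][j] == 1 means object i may obscure object j."""
    n = len(objects)
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for plane in planes:
                si, sj = side(plane, objects[i]), side(plane, objects[j])
                if si != sj:  # this plane separates the two objects
                    # the object sharing the camera's side wins priority
                    if si == side(plane, camera):
                        m[i][j] = 1
                    break
    return m
```

For two objects separated by a single line, the object on the camera's side of that line ends up with the 1 in its row, which is exactly the priority relationship the report describes.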
It was not until 1980 that a paper, titled "On Visible Surface Generation by A Priori Tree Structures," demonstrated a concrete algorithm to accomplish this. The 1980 paper, written by Henry Fuchs, Zvi Kedem, and Bruce Naylor, introduced the BSP tree. The authors say that their novel data structure is "an alternative solution to an approach first utilized a decade ago but due to a few difficulties, not widely exploited"—here referring to the approach taken in the 1969 Air Force study.3 A BSP tree, once constructed, can easily be used to provide a priority ordering for objects in the scene.
Fuchs, Kedem, and Naylor give a pretty readable explanation of how a BSP tree works, but let me see if I can provide a less formal but more concise one.
You begin by picking one polygon in your scene and making the plane in which the polygon lies your partitioning plane. That one polygon also ends up as the root node in your tree. The remaining polygons in your scene will be on one side or the other of your root partitioning plane. The polygons on the "forward" side or in the "forward" half-space of your plane end up in the left subtree of your root node, while the polygons on the "back" side or in the "back" half-space of your plane end up in the right subtree. You then repeat this process recursively, picking a polygon from your left and right subtrees to be the new partitioning planes for their respective half-spaces, which generates further half-spaces and further sub-trees. You stop when you run out of polygons.
Say you want to render the geometry in your scene from back-to-front. (This is known as the "painter's algorithm," since it means that polygons further from the camera will get drawn over by polygons closer to the camera, producing a correct rendering.) To achieve this, all you have to do is an in-order traversal of the BSP tree, where the decision to render the left or right subtree of any node first is determined by whether the camera viewpoint is in either the forward or back half-space relative to the partitioning plane associated with the node. So at each node in the tree, you render all the polygons on the "far" side of the plane first, then the polygon in the partitioning plane, then all the polygons on the "near" side of the plane—"far" and "near" being relative to the camera viewpoint. This solves the VSD problem because, as we learned several paragraphs back, the polygons on the far side of the partitioning plane cannot obstruct anything on the near side.
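To make the construction and traversal concrete, here is a toy 2D version in Python. This is my own sketch, not code from the paper or from Doom: walls are line segments, the first wall in each list becomes the partitioning line for that subtree, and a wall that crosses the partitioning line gets split in two, just as described above.

```python
EPS = 1e-9

def raw_side(wall, point):
    """Signed area test: positive if point is in the 'forward'
    half-space of the wall's line, negative if in the 'back' one."""
    (x1, y1), (x2, y2) = wall
    px, py = point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

def build(walls):
    """Build a BSP tree; the first wall becomes the partitioning line."""
    if not walls:
        return None
    root, rest = walls[0], walls[1:]
    front, back = [], []
    for w in rest:
        s1, s2 = raw_side(root, w[0]), raw_side(root, w[1])
        if s1 >= -EPS and s2 >= -EPS:
            front.append(w)
        elif s1 <= EPS and s2 <= EPS:
            back.append(w)
        else:
            # The wall crosses the partitioning line: split it in two
            # so each piece can live in its own half-space.
            t = s1 / (s1 - s2)
            mid = (w[0][0] + t * (w[1][0] - w[0][0]),
                   w[0][1] + t * (w[1][1] - w[0][1]))
            first, second = (w[0], mid), (mid, w[1])
            (front if s1 > 0 else back).append(first)
            (back if s1 > 0 else front).append(second)
    return {"wall": root, "front": build(front), "back": build(back)}

def painter_order(node, camera):
    """Back-to-front in-order traversal: far subtree, node, near subtree."""
    if node is None:
        return []
    if raw_side(node["wall"], camera) > 0:
        near, far = "front", "back"
    else:
        near, far = "back", "front"
    return (painter_order(node[far], camera)
            + [node["wall"]]
            + painter_order(node[near], camera))
```

Building the tree once and then calling painter_order with different camera positions returns a correct back-to-front ordering each time, which previews the "construct once, traverse many times" property discussed below.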
The following diagram shows the construction and traversal of a BSP tree representing a simple 2D scene. In 2D, the partitioning planes are instead partitioning lines, but the basic idea is the same in a more complicated 3D scene.
Step One: The root partitioning line along wall D splits the remaining geometry into two sets.
Step Two: The half-spaces on either side of D are split again. Wall C is the only wall in its half-space so no split is needed. Wall B forms the new partitioning line in its half-space. Wall A must be split into two walls since it crosses the partitioning line.
A back-to-front ordering of the walls relative to the viewpoint in the top-right corner, useful for implementing the painter's algorithm. This is just an in-order traversal of the tree.
The really neat thing about a BSP tree, which Fuchs, Kedem, and Naylor stress several times, is that it only has to be constructed once. This is somewhat surprising, but the same BSP tree can be used to render a scene no matter where the camera viewpoint is. The BSP tree remains valid as long as the polygons in the scene don't move. This is why the BSP tree is so useful for real-time rendering—all the hard work that goes into constructing the tree can be done beforehand rather than during rendering.
One issue that Fuchs, Kedem, and Naylor say needs further exploration is the question of what makes a "good" BSP tree. The quality of your BSP tree will depend on which polygons you decide to use to establish your partitioning planes. I skipped over this earlier, but if you partition using a plane that intersects other polygons, then in order for the BSP algorithm to work, you have to split the intersected polygons in two, so that one part can go in one half-space and the other part in the other half-space. If this happens a lot, then building a BSP tree will dramatically increase the number of polygons in your scene.
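One common way to frame this choice (a toy 2D illustration of my own, not Naylor's actual method) is to try each polygon as a root candidate and count how many of the other polygons its plane would cut in two, then pick the candidate that causes the fewest splits:

```python
# Hypothetical split-counting heuristic for choosing a partitioning
# wall in 2D. Walls are segments ((x1, y1), (x2, y2)).

def crossings(candidate, walls):
    """Count how many other walls the candidate's line would split."""
    (x1, y1), (x2, y2) = candidate

    def s(p):
        # Signed side of p relative to the candidate's line.
        return (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)

    count = 0
    for w in walls:
        if w is candidate:
            continue
        s1, s2 = s(w[0]), s(w[1])
        if (s1 > 0 and s2 < 0) or (s1 < 0 and s2 > 0):
            count += 1  # endpoints straddle the line: one extra split
    return count

def best_partition(walls):
    """Pick the wall whose line splits the fewest other walls."""
    return min(walls, key=lambda w: crossings(w, walls))
```

Real BSP builders balance this split count against keeping the tree reasonably well balanced, but split minimization alone already captures why the choice of partitioning polygon matters.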
Bruce Naylor, one of the authors of the 1980 paper, would later write about this problem in his 1993 paper, "Constructing Good Partitioning Trees." According to John Romero, one of Carmack's fellow id Software co-founders, this paper was one of the papers that Carmack read when he was trying to implement BSP trees in Doom.4
BSP Trees in Doom
Remember that, in his first draft of the Doom renderer, Carmack had been trying to establish a rendering order for level geometry by "flooding" the renderer out from the player's current room into neighboring rooms. BSP trees were a better way to establish this ordering because they avoided the issue where the renderer found itself visiting the same room (or sector) multiple times, wasting CPU cycles.
"Adding BSP trees to Doom" meant, in practice, adding a BSP tree generator to the Doom level editor. When a level in Doom was complete, a BSP tree was generated from the level geometry. According to Fabien Sanglard, the generation process could take as long as eight seconds for a single level and 11 minutes for all the levels in the original Doom.5 The generation process was lengthy in part because Carmack's BSP generation algorithm tries to search for a "good" BSP tree using various heuristics. An eight-second delay would have been unforgivable at runtime, but it was not long to wait when done offline, especially considering the performance gains the BSP trees brought to the renderer. The generated BSP tree for a single level would have then ended up as part of the level data loaded into the game when it starts.
Carmack put a spin on the BSP tree algorithm outlined in the 1980 paper, because once Doom is started and the BSP tree for the current level is read into memory, the renderer uses the BSP tree to draw objects front-to-back rather than back-to-front. In the 1980 paper, Fuchs, Kedem, and Naylor show how a BSP tree can be used to implement the back-to-front painter's algorithm, but the painter's algorithm involves a lot of over-drawing that would have been expensive on an IBM-compatible PC. So the Doom renderer instead starts with the geometry closer to the player, draws that first, then draws the geometry farther away. This reverse ordering is easy to achieve using a BSP tree, since you can just make the opposite traversal decision at each node in the tree. To ensure that the farther-away geometry is not drawn over the closer geometry, the Doom renderer uses a kind of implicit z-buffer that provides much of the benefit of a z-buffer with a much smaller memory footprint. There is one array that keeps track of occlusion in the horizontal dimension, and another two arrays that keep track of occlusion in the vertical dimension from the top and bottom of the screen. The Doom renderer can get away with not using an actual z-buffer because Doom is not technically a fully 3D game. The cheaper data structures work because certain things never appear in Doom: The horizontal occlusion array works because there are no sloping walls, and the vertical occlusion arrays work because no walls have, say, two windows, one above the other.
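The horizontal part of that trick can be sketched in a few lines. This is my own toy version, far simpler than Doom's actual code (which tracks ranges of columns rather than per-column flags): walls arrive nearest-first from the BSP traversal, and a boolean per screen column records whether something nearer has already claimed it.

```python
# Toy front-to-back renderer using a 1D occlusion array. Each wall is
# a horizontal span of screen columns; spans arrive nearest-first.

SCREEN_W = 16

def draw_walls_front_to_back(spans):
    """spans: list of (name, first_col, last_col), nearest first.
    Returns {column: wall name} for the visible result."""
    occluded = [False] * SCREEN_W
    visible = {}
    for name, x1, x2 in spans:
        for x in range(x1, x2 + 1):
            if not occluded[x]:
                visible[x] = name   # this column shows this wall...
                occluded[x] = True  # ...and blocks everything farther
    return visible
```

Because nearer spans claim their columns first, a farther wall is only ever drawn into the columns the nearer walls left uncovered, so no pixel is drawn twice. That over-draw avoidance is the whole point of flipping the painter's algorithm around.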
The only other tricky issue left is how to incorporate Doom's moving characters into the static level geometry drawn with the aid of the BSP tree. The enemies in Doom cannot be a part of the BSP tree because they move; the BSP tree only works for geometry that never moves. So the Doom renderer draws the static level geometry first, keeping track of the segments of the screen that were drawn to (with yet another memory-efficient data structure). It then draws the enemies in back-to-front order, clipping them against the segments of the screen that occlude them. This process is not as optimal as rendering using the BSP tree, but because there are usually fewer enemies visible than there is level geometry in a level, speed isn't as much of an issue here.
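The two-phase idea above can be sketched like this (again my own toy code, reducing everything to column ranges): the columns covered by nearer walls are recorded first, then sprites are painted back-to-front into whatever those walls left visible.

```python
# Toy sprite pass: walls were drawn first and claimed some columns;
# sprites are then drawn back-to-front (farthest first), clipped
# against the wall-occluded columns.

def draw_sprites(wall_columns, sprites):
    """wall_columns: set of screen columns covered by nearer walls.
    sprites: list of (name, first_col, last_col), farthest first.
    Returns {column: sprite name} after clipping."""
    visible = {}
    for name, x1, x2 in sprites:
        for x in range(x1, x2 + 1):
            if x not in wall_columns:
                visible[x] = name  # nearer sprites overwrite farther ones
    return visible
```

Note the ordering mismatch this illustrates: the static geometry is drawn front-to-back with occlusion tracking, while the sprites fall back to the plain back-to-front painter's algorithm, with its over-draw, because there are few enough of them for that to be cheap.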
Using BSP trees in Doom was a major win. Obviously it is pretty neat that Carmack was able to figure out that BSP trees were the perfect solution to his problem. But was it a genius-level move?
In his excellent book about the Doom game engine, Fabien Sanglard quotes John Romero saying that Bruce Naylor's paper, "Constructing Good Partitioning Trees," was mostly about using BSP trees to cull backfaces from 3D models.6 According to Romero, Carmack thought the algorithm could still be useful for Doom, so he went ahead and implemented it. This description is quite flattering to Carmack—it implies he saw that BSP trees could be useful for real-time video games when other people were still using the technique to render static scenes. There is a similarly flattering story in Masters of Doom: Kushner suggests that Carmack read Naylor's paper and asked himself, "what if you could use a BSP to create not just one 3D image but an entire virtual world?"7
This framing ignores the history of the BSP tree. When those US Air Force researchers first realized that partitioning a scene might help speed up rendering, they were interested in speeding up real-time rendering, because they were, after all, trying to create a flight simulator. The flight simulator example comes up again in the 1980 BSP paper. Fuchs, Kedem, and Naylor talk about how a BSP tree would be useful in a flight simulator that pilots use to practice landing at the same airport over and over again. Since the airport geometry never changes, the BSP tree can be generated just once. Clearly what they have in mind is a real-time simulation. In the introduction to their paper, they even motivate their research by talking about how real-time graphics systems must be able to create an image at least once every 1/30th of a second.
So Carmack was not the first person to think of using BSP trees in a real-time graphics simulation. Of course, it's one thing to anticipate that BSP trees might be used this way and another thing to actually do it. But even in the implementation Carmack may have had more guidance than is commonly assumed. The Wikipedia page about BSP trees, at least as of this writing, suggests that Carmack consulted a 1991 paper by Chen and Gordon as well as a 1990 textbook called Computer Graphics: Principles and Practice. Though no citation is provided for this claim, it is probably true. The 1991 Chen and Gordon paper outlines a front-to-back rendering approach using BSP trees that is basically the same approach taken by Doom, right down to what I've called the "implicit z-buffer" data structure that prevents farther polygons being drawn over nearer polygons. The textbook provides a great overview of BSP trees and some pseudocode both for building a tree and for displaying one. (I've been able to skim through the 1990 edition thanks to my wonderful university library.) Computer Graphics: Principles and Practice is a classic text in computer graphics, so Carmack might well have owned it.
Still, Carmack found himself faced with a novel problem—"How can we make a first-person shooter run on a computer with a CPU that can't even do floating-point operations?"—did his research, and proved that BSP trees are a useful data structure for real-time video games. I still think that is an impressive feat, even if the BSP tree had first been invented a decade prior and was pretty well theorized by the time Carmack read about it. Perhaps the accomplishment that we should really celebrate is the Doom game engine as a whole, which is a seriously nifty piece of work. I've mentioned it once already, but Fabien Sanglard's book about the Doom game engine (Game Engine Black Book: DOOM) is an excellent overview of all the different clever components of the game engine and how they fit together. We shouldn't forget that the VSD problem was just one of many problems that Carmack had to solve to make the Doom engine work. That he was able, on top of everything else, to read about and implement a complicated data structure unknown to most programmers speaks volumes about his technical expertise and his drive to perfect his craft.
If you enjoyed this post, more like it come out every four weeks! Follow @TwoBitHistory on Twitter or subscribe to the RSS feed to make sure you know when a new post is out.
1. Michael Abrash, "Michael Abrash's Graphics Programming Black Book," James Gregory, accessed November 6, 2019, http://www.jagregory.com/abrash-black-book/#chapter-64-quakes-visible-surface-determination.
2. R. Schumacher, B. Brand, M. Gilliland, W. Sharp, "Study for Applying Computer-Generated Images to Visual Simulation," Air Force Human Resources Laboratory, December 1969, accessed November 6, 2019, https://apps.dtic.mil/dtic/tr/fulltext/u2/700375.pdf.
3. Henry Fuchs, Zvi Kedem, Bruce Naylor, "On Visible Surface Generation by A Priori Tree Structures," ACM SIGGRAPH Computer Graphics, July 1980.
4. Fabien Sanglard, Game Engine Black Book: DOOM (CreateSpace Independent Publishing Platform, 2018), 200.
5. Sanglard, 206.
6. Sanglard, 200.
7. David Kushner, Masters of Doom (Random House Trade Paperbacks, 2004), 142.

Many readers will remember the two packs of Horror Top Trumps, which were first issued in 1978. What is not commonly known is that the first pack was recalled after 3 days only to be rereleased a month later minus one card: The Scarfolk card.
The card had proved so effective that, not only could it effortlessly beat every other card, it also killed the losing player within moments of the game ending.
Learning of the inexplicable power of the card, the government immediately issued the recall, albeit not in the interest of public safety. Instead, it coerced citizens on welfare into playing the game during home assessment visits. The government also targeted enemies of the state, using the card in so-called 'black operations' at home and abroad.
In 1979, a catastrophe was narrowly avoided when the Scarfolk card was played in a game opposite a forgery of itself. Fortunately, the game's location was sparsely populated and the only victims of the resulting dark-matter explosion were a government agent, an unknown dissenter, seven ducks and, less significantly, four coachloads of orphans* who were driven to the remote site for reasons unknown.
*The orphans were children of disgraced artists, academics and other intellectuals who disappeared during the New Truth Purges of September 1977**.
** Edit: Apparently, according to fresh information, no such purges took place.
Happy Halloween/Samhain from everyone at Scarfolk Council.
The Scarfolk Annual 197x.
OUT NOW (US/Can: 10.29.2019)
Available from: Amazon (http://bit.ly/scarfolkbook), Hive, Waterstones, The Guardian Bookshop, Foyles, Wordery, Blackwells, Forbidden Planet, Barnes & Noble, Books-A-Million & others.
For more information please reread.
Just a quick note for those who, like me, need to fiddle for a few hours while the world burns. Oh wait, that’s not quite what I meant, but anyway, if you want a distraction, here’s one: the Younger Dryas Impact Hypothesis.
The basic idea, as noted in the Wikipedia link above, is that around 12,800 years ago, either a bolide fragmented above the Earth in a sort of super-Tunguska event, or an asteroid struck (possibly under the Hiawatha glacier in Greenland, near where the Cape York meteorite was found. And yes, possibly the Cape York fragments are part of it). I'm personally partial to an asteroid strike because one of the (to me) more solid lines of evidence is a spike in platinum around the world dating from around 12,800 BP, found most recently in Africa, but basically on every continent except Antarctica.
This hypothesis is controversial of course—it should be, given the way normal science works. But I think it does clear up some mysteries. For example, it may explain why the megafaunal extinction happened around then in America and northern Eurasia, and not thousands of years earlier.
Anatomically modern humans have been around for at least 300,000 years, and we evidently tried agriculture around 22,000 years ago near what's now the Dead Sea. People like to hypothesize that ancient humans were more primitive than moderns, and that's why they stayed few in number and simple in lifestyle, but I disagree. I personally think that the reason that humans didn't take over the Earth hundreds of thousands of years ago was that the climate in the ice age fluctuated too radically to allow the rise of civilization. There's little point in depending on crops if they fail most years.
Anyway, during those 300,000 years, humans lived alongside big animals (megafauna), except in the Americas (settled 10,000-20,000 years ago), in Australia (settled 65,000 years ago) and New Zealand and the Pacific (settled less than 3,000 years ago—we'll ignore this for now). My personal hypothesis before I started thinking about the Younger Dryas Impact Hypothesis was that megafaunal extinctions were due to human predation and habitat change. While that's unambiguously true in the Polynesian Islands and Madagascar (which I hate saying), it's not clear what happened in Australia and the Americas. In Australia, the aboriginal population first settled around 65,000 years BP, but the megafaunal die-off happened "rapidly thereafter" (per the biologists) starting around 46,000 years BP. This is a classic example of why biologists need to do more math. 19,000 years of coexistence is NOT rapid. Similarly in the Americas, humans lived alongside the megafauna for at least 2,000 years, if not 8,000 years, before the megafaunal extinction started "rapidly" happening. We don't blame Europeans or Asians for wiping out their mammoths and other megafauna (do you ever hear the Chinese criticized for wiping out the elephants and rhinos around Beijing 3,000 years ago? That was considerably more rapid.). That's why I agree with the Native Americans and Aboriginals who say that accusations of ancient ecocide are just veiled neo-colonial attempts to justify taking their land. They're right: thousands of years of coexistence is not a short time.
And that leaves the Younger Dryas Impact. If it happened, it presumably did not play a role in the Australian megafaunal extinction (it’s around 33,000 years too late), but it could have played a major role in the megafaunal extinctions in the northern hemisphere, and possibly into South America. All that platinum had to come from somewhere.
One criticism leveled against the impact hypothesis is questioning why the proposed impact only killed big animals, not little ones. That’s easily answered, at least if you believe Anthony Martin, author of The Evolution Underground: Burrows, Bunkers, and the Marvelous Subterranean World Beneath our Feet (BigMuddy Link, in case you want to read this really fun book). He makes a point that during extinction events, mass or otherwise, animals that can shelter underground survive disproportionately well. So if a smallish asteroid struck, especially during northern winter, it would harm everything living above the surface (e.g. the megafauna) but animals hunkered down in burrows, especially under the snow, would be proportionally less affected. That’s not quite what we see, as things like bison and moose survived the possible impact, but it’s a reasonable hypothesis that can be tested.
Anyway, if you want to dive down the rabbit hole for shelter, you can waste happy hours on something other than obsessing about national meltdowns in the US or UK. That's one reason I'm posting this.
The other reason to post is that I don’t know of much, well any, alt-history SF that explores worlds where the impact didn’t happen and the megafauna of the Americas and Eurasia didn’t go extinct 12,800 years ago. As an alt-history, the changes are rather subtle, more about setting than plot, in a No Younger Dryas (NYD) world. But they could be fun.
I’m pretty sure that agriculture and civilization would have arisen in NYD as they did in our timeline, although possibly 1,000 years or more earlier (the Younger Dryas lasted around 1,200 years). There are multiple reasons for this confidence:
- Agriculture arose in West Africa, Ethiopia, China, India, and possibly Southeast Asia in places where there were lots of megafauna (elephants, rhinos, lions, tigers, etc.), so having big herbivores around does not preclude people inventing agriculture.
- Someone tried agriculture back during the preceding ice age at least once that we know of, and that was with a full panoply of biggish critters around. They most likely failed due to climate change, not rampaging mammoths.
What would be different in a NYD world is that mammoths, rhinos, cave lions, sabertooths, and all that ilk would either be present in modern times or recently extinct in civilized lands. This would be particularly true in the Americas, if only because the classical Mediterranean civilizations, the Medieval Europeans, and the Chinese were all pretty darn good at getting rid of their megafauna. Colonizing the New World would have been a bit more like attempts to colonize Africa than what actually happened, with the Hudson’s Bay Company equivalent trading as much in mammoth or mastodon ivory as in beaver furs, and livestock kept at night in kraals of, perhaps, spiny osage orange branches or similar, to keep the lions away.*
Anyway, it’s something for creatives to play with, if they want to distract themselves from the current chaos. Heck, you could combine NYD with the Alt-Chinese colonizing (or attempting to colonize) the west coast of North America and introducing iron-working, first generation firearms, and a full complement of Old World diseases to the peoples of the Pacific Coast. That would make things much, much weirder, especially if the Europeans colonized the east coast of the Americas centuries later in timeline, so that both the diseases and the technologies had their chance to rampage around the continent.
Have fun!
*Actually, there’s a whole post I could write about beavers as ecological engineers and about how their loss from the US just prior to European settlement has given us a really distorted idea of how this continent is supposed to work. Maybe later.