Weblogs: All the news that fits
02-Jan-25
Ass Man. [ 28-Nov-24 5:14pm ]

Which is what we called "Asset Management" behind the scenes, while I was writing it. Of course, back then I didn't know I was writing it for Keanu fucking Reeves.

Apparently they mentioned my name on some ComicCon panel about a month ago, which effectively lifted the embargo ("Brag #2" was as far as I could go back in September). Now they've released a couple of meaty clips over on YouTube. I am—I suppose the word is psyched.

You can see me in there, kind of. My dialog didn't make it through unscathed, because "Asset Management" incorporated specific game/plot elements that were declared off-limits post hoc (not really sure why; spoilers can't be much of an issue for a game that came out over a year ago). The folks at Blur had to strip out a lot of details to make that work; they changed the ending entirely. But—and you don't see authors admit this very often, so take it to heart—I think their changes were for the better. I'd been writing within the constraints not just of Rubicon the world, but also of Armored Core VI's specific plot. Those constraints forced the story into a specific and complex mold. Unbound, though, it could assume a simpler and more natural shape.

So the dialog in one of these clips expresses the essence of what I wrote with slightly different verbiage. The dialog in the other is pretty much all me, but there's only one line of it so that's not saying much. The mech, as Keanu mentions, is called Shrieker— but that's just a phonetic simplification of a much more awkward spelling, with much more world-changing implications. I look forward to seeing whether we get to see the actual name stamped onto a breastplate or something.

A Boy and His Mech

Because I still haven't seen the whole thing, you see. Blur tried to slip me a copy, but Amazon was only willing to let it stream via a service that only supports two of the most pernicious and surveillant browsers on the planet (Chrome and Edge). It wouldn't play nicely with Firefox, Brave, and/or their associated privacy plug-ins— and there was no way I was gonna let those guards down. Hell, Amazon's one of the reasons I have them up in the first place.

A Mech and Its Boy.

So once you check out the clips you'll have seen all I have. But the good news is, those moments are totally in sync with the vibe I was going for when I wrote the treatment. And the changes that make any difference at all are improvements.

For a proper assessment, of course, I'll just have to wait until December 10th with the rest of you. Steal the ep when it drops.

You and me both, Keanu. In it to the end.

"In 1933, people were not fooled by propaganda. They elected a leader who openly disclosed his plans with great clarity. The Germans elected me… ordinary people who chose to elect an extraordinary man, and entrust the fate of the country to him.

"What do you want to do, Sawatzki? Ban elections?"

—Adolf Hitler, "Er ist wieder da" (2015)

Pete Townshend's prayer has gone unanswered.

The US Dollar is "surging" even as I type, along with shares in Bitcoin and Tesla. Wall Street opened at an all-time high. The world's leaders, scared shitless, scramble over each other to fall in line and congratulate the new/old boss on his resurrection. The world's tyrants and shitstains—Orban, Netanyahu, Bolsonaro— were of course at the front of the line[1], but even vacuous ineffectual bobbleheads like Canada's own PM jump up and down like eager puppies, reminding Dear Leader of a friendship "united by a shared history, common values and steadfast ties between our peoples". (And, as the BUG astutely remarked, wouldn't it be interesting to see the alternate statements all these folks had doubtless prepared in the event that the merely mediocre candidate had triumphed over the apocalyptically bad one?)

Trump may not have spoken with the "great clarity" that Hitler commanded, but it's not as if he hid his agenda. He won anyway, handily. I guess you really do get the government you deserve.

And yet, for all Trump's hateful narcissistic idiocy, he's not really the problem. Trump is merely a symptom. The problem is a political system that rewards the world's Trumps with massive power and influence, instead of marginalizing them. A number of left-leaning 'Murricans—most, I'd wager—seem to regard their homeland as a great nation, a shining experiment in liberty and democracy, that somehow lost its way. I tend to be less charitable. The US is a nation literally founded on invasion, slavery, and germ warfare—which is to say, it was born in the universal reality of human beings fucking each other over for a percentage. It is a global case-in-point of Homo so-called sapiens belying its own self-aggrandizing myths and behaving exactly like the social mammals we are: short-sighted apes, prioritizing the approval of the tribe over long-term consequences our intellects grasp dimly at best, and our guts reject outright. Facts don't matter. Truth is not the survival trait. Conformity is: defense of the tribe, hatred of The Other.

U S A. U S A.

Is there hope? Perhaps a little: ironically, in the crushing of hope. Because despite what the hopepunks have been wittering on about all these years—despite the bromides about how we should stop writing dystopias and never say 1.5° is 'unachievable' because to do so would give in to paralyzing narratives of hopelessness and despair—we have evidence that the exact opposite is true. Ballew et al over at Nature report that "Climate change psychological distress is associated with increased collective climate action". People who are bummed out and distressed about the state of the environment are more likely to get off their asses and do something than are all those cozy optimists who tell us to put on a happy face because We Live In The Greatest Nation On Earth and Things Will Work Out Somehow. Ballew et al show, at high levels of significance, that the Hope Police have their heads up their asses.

This study, admittedly, did not explicitly deal with Gileads-in-the-making, with the unchecked power of demagogues or the stripping away of Human rights. It deals with our reaction to environmental havoc—and even if that's all it applies to, I'm cool with it. I personally am more concerned about ecocide than Human rights. Treating each other like shit at least keeps our sins in the family—and frankly, given the contempt with which we treat the rest of the biosphere, I'm skeptical that Humans even deserve "rights". Certainly, Trump's victory has pretty much incinerated whatever faint hope we might still have had of turning things around environmentally. The world was already circling the toilet bowl on that front (and would doubtless have continued in that grim decaying orbit even if Harris had prevailed); Trump's victory flushes it down for good.

I'd be very surprised, though, if activism was catalyzed only by distress of the environmental kind. Almost certainly, it also maps onto the social distress currently being experienced even by all those Democrats who, according to polls and campaign priorities, really don't give a shit about climate change (at least, not relative to "the economy" or "the border"—remember how Harris flip-flopped on the whole fracking thing to appeal to her tribe?). Hell, how many pundits have already attributed Trump's victory to the distress-induced activism of Magats who found themselves paying too much for groceries?

So maybe this will be the impetus for something. Maybe "the resistance" will be more than an ineffectual Star Wars call-out this time around.

Maybe. But I'm gonna start pitching in on building this Wall anyway.

And on the plus side, all those environmental dystopias I wrote back in the day are looking increasingly prophetic. Should give my street cred a bit of a boost, until the book banners burn through all that gender smut and start looking further afield…


  1. Except for Putin, curiously. Putin seems to be staying relatively quiet. Almost as if he doesn't need to kowtow, because he knows he'll get what he wants without any embarrassing displays of self-abasement…

A Blast from the Past:

Arpanet.

Internet.

The Net. Not such an arrogant label, back when one was all they had.

Cyberspace lasted a bit longer— but space implies great empty vistas, a luminous galaxy of icons and avatars, a hallucinogenic dreamworld in 48-bit color. No sense of the meatgrinder in cyberspace. No hint of pestilence or predation, creatures with split-second lifespans tearing endlessly at each others’ throats. Cyberspace was a wistful fantasy-word, like hobbit or biodiversity, by the time Achilles Desjardins came onto the scene.

Onion and metabase were more current. New layers were forever being laid atop the old, each free—for a while—from the congestion and static that saturated its predecessors. Orders of magnitude accrued with each generation: more speed, more storage, more power. Information raced down conduits of fiberop, of rotazane, of quantum stuff so sheer its very existence was in doubt. Every decade saw a new backbone grafted onto the beast; then every few years. Every few months. The endless ascent of power and economy proceeded apace, not as steep a climb as during the fabled days of Moore, but steep enough.

And coming up from behind, racing after the expanding frontier, ran the progeny of laws much older than Moore’s.

It’s the pattern that matters, you see. Not the choice of building materials. Life is information, shaped by natural selection. Carbon’s just fashion, nucleic acids mere optional accessories. Electrons can do all that stuff, if they’re coded the right way.

It’s all just Pattern.

And so viruses begat filters; filters begat polymorphic counteragents; polymorphic counteragents begat an arms race. Not to mention the worms and the ‘bots and the single-minded autonomous datahounds—so essential for legitimate commerce, so vital to the well-being of every institution, but so needy, so demanding of access to protected memory. And way over there in left field, the Artificial Life geeks were busy with their Core Wars and their Tierra models and their genetic algorithms. It was only a matter of time before everyone got tired of endlessly reprogramming their minions against each other. Why not just build in some genes, a random number generator or two for variation, and let natural selection do the work?

The problem with natural selection, of course, is that it changes things.

The problem with natural selection in networks is that things change fast.

By the time Achilles Desjardins became a ‘Lawbreaker, Onion was a name in decline. One look inside would tell you why. If you could watch the fornication and predation and speciation without going grand mal from the rate-of-change, you knew there was only one word that really fit: Maelstrom.

Of course, people still went there all the time. What else could they do? Civilization’s central nervous system had been living inside a Gordian knot for over a century. No one was going to pull the plug over a case of pinworms.

—Me, Maelstrom, 2001

*

Ah, Maelstrom. My second furry novel. Hard to believe I wrote it almost a quarter-century ago.

Maelstrom combined cool prognostications with my usual failure of imagination. I envisioned programs that were literally alive— according to the Dawkinsian definition of Life as "Information shaped by natural selection"—and I patted myself on the back for applying Darwinian principles to electronic environments. (It was a different time. The phrase "genetic algorithm" was still shiny-new and largely unknown outside academic circles).

I confess to being a bit surprised—even disappointed—that things haven't turned out that way (not yet, anyway). I'll grant that Maelstrom's predictions hinge on code being let off the leash to evolve in its own direction, and that coders of malware won't generally let that happen. You want your botnets and phishers to be reliably obedient; you're not gonna steal many identities or get much credit card info from something that's decided reproductive fitness is where it's at. Still, as Michael Caine put it in The Dark Knight: some people just want to watch the world burn. You'd think that somewhere, someone would have brought their code to life precisely because it could indiscriminately fuck things up.

Some folks took Maelstrom's premise and ran with it. In fact, Maelstrom seems to have been more influential amongst those involved in AI and computer science (about which I know next to nothing) than Starfish ever was among those who worked in marine biology (a field in which I have a PhD). But my origin story for Maelstrom's wildlife was essentially supernatural. It was the hand of some godlike being that brought it to life. We were the ones who gave mutable genes to our creations; they only took off after we imbued them with that divine spark.

It never even occurred to me that code might learn to do that all on its own.

Apparently it never occurred to anyone. Simulation models back then were generating all sorts of interesting results (including the spontaneous emergence of parasitism, followed shortly thereafter by the emergence of sex), but none of that A-Life had to figure out how to breed; their capacity for self-replication was built in at the outset.

Now Blaise Agüera y Arcas and his buddies at Google have rubbed our faces in our own lack of vision. Starting with a programming language called (I kid you not) Brainfuck, they built a digital "primordial soup" of random bytes, ran it under various platforms, and, well…read the money shot for yourself, straight from the (non-peer-reviewed) arXiv preprint "Computational Life: How Well-formed, Self-replicating Programs Emerge from Simple Interaction"[1]:

"when random, non self-replicating programs are placed in an environment lacking any explicit fitness landscape, self-replicators tend to arise. … increasingly complex dynamics continue to emerge following the rise of self-replicators."

Apparently, self-replicators don't even need random mutation to evolve. The code's own self-modification is enough to do the trick. Furthermore, while

"…there is no explicit fitness function that drives complexification or self-replicators to arise. Nevertheless, complex dynamics happen due to the implicit competition for scarce resources (space, execution time, and sometimes energy)."

For those of us who glaze over whenever we see an integral sign, Arcas provides a lay-friendly summary over at Nautilus, placed within a historical context running back to Turing and von Neumann.

But you're not really interested in that, are you? You stopped being interested the moment you learned there was a computer language called Brainfuck: that's what you want to hear about. Fine: Brainfuck is a rudimentary coding language whose only mathematical operations are "add 1" and "subtract 1". (In a classic case of understatement, Arcas et al describe it as "onerous for humans to program with".) The entire language contains a total of ten commands (eleven if you count a "true zero" that's used to exit loops). All other byte values (out of the 256 possible) are interpreted as data.
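
For the morbidly curious, classic Brainfuck is simple enough to interpret in a couple dozen lines. This is a sketch of the textbook language only (input via ',' omitted), not the paper's extended dialect, which among other things treats non-command bytes as inert data and lets programs read and write their own code:

```python
def run_bf(code, max_steps=10_000):
    """Interpret classic Brainfuck: > < + - . and [ ] loops.
    A flavor sketch only; the paper's BFF variant differs."""
    tape = [0] * 256                  # data tape of byte-sized cells
    dp = ip = 0                       # data pointer, instruction pointer
    out = []
    # Pre-match brackets so loops can jump directly.
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']' and stack:
            j = stack.pop()
            jump[i], jump[j] = j, i
    while ip < len(code) and max_steps > 0:
        c = code[ip]
        if c == '>':
            dp = (dp + 1) % len(tape)
        elif c == '<':
            dp = (dp - 1) % len(tape)
        elif c == '+':
            tape[dp] = (tape[dp] + 1) % 256
        elif c == '-':
            tape[dp] = (tape[dp] - 1) % 256
        elif c == '.':
            out.append(tape[dp])
        elif c == '[' and tape[dp] == 0:
            ip = jump.get(ip, ip)     # unmatched bracket: no-op
        elif c == ']' and tape[dp] != 0:
            ip = jump.get(ip, ip)
        ip += 1
        max_steps -= 1                # halting is never guaranteed
    return bytes(out)
```

So `run_bf("++[->++<]>.")` loops twice, doubling a counter into the next cell, and emits the raw byte 4. Everything is increments, decrements, and loops; hence "onerous for humans".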

So. Imagine two contiguous 64-byte strings of RAM, seeded with random bytes. Each functions as a Brainfuck program, each byte interpreted as either a command or a data point. Arcas et al speak of

"the interaction between any two programs (A and B) as an irreversible chemical reaction where order matters. This can be described as having a uniform distribution of catalysts a and b that interact with A and B as follows:

Which as far as I can tell boils down to “a” catalyzes the smushing of programs A and B into a single long-string program, which executes and alters itself in the process; then the “split” part of the equation cuts the resulting string back into two segments of the initial A and B lengths.

You know what this looks like? This looks like autocatalysis: the process whereby the product of a chemical reaction catalyzes the reaction itself. A bootstrap thing. Because this program reads and writes to itself, the execution of the code rewrites the code. Do this often enough, and one of those 64-byte strings turns into a self-replicator.
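
To make that concrete, here's a loose sketch of one concatenate-execute-split interaction. Caveat: the command set and two-head semantics below are modeled on my reading of the preprint's self-modifying Brainfuck variant, not copied from its code, so treat the details as assumptions:

```python
import random

def execute(tape, horizon=512):
    """Run a byte string over itself: the tape IS the program.
    Two read/write heads (h0, h1), copy ops, bracket loops;
    every non-command byte is inert data."""
    h0 = h1 = ip = 0
    n = len(tape)
    while ip < n and horizon > 0:
        c = chr(tape[ip])
        if c == '<':
            h0 = (h0 - 1) % n
        elif c == '>':
            h0 = (h0 + 1) % n
        elif c == '{':
            h1 = (h1 - 1) % n
        elif c == '}':
            h1 = (h1 + 1) % n
        elif c == '+':
            tape[h0] = (tape[h0] + 1) % 256
        elif c == '-':
            tape[h0] = (tape[h0] - 1) % 256
        elif c == '.':
            tape[h1] = tape[h0]       # copy head0 -> head1
        elif c == ',':
            tape[h0] = tape[h1]       # copy head1 -> head0
        elif c == '[' and tape[h0] == 0:
            depth = 1                 # scan forward for matching ']'
            while depth and ip < n - 1:
                ip += 1
                depth += (tape[ip] == ord('[')) - (tape[ip] == ord(']'))
        elif c == ']' and tape[h0] != 0:
            depth = 1                 # scan back for matching '['
            while depth and ip > 0:
                ip -= 1
                depth += (tape[ip] == ord(']')) - (tape[ip] == ord('['))
        ip += 1
        horizon -= 1

def soup_epoch(soup, interactions):
    """Pick random pairs of 64-byte programs, smush, execute, split."""
    for _ in range(interactions):
        i, j = random.sample(range(len(soup)), 2)
        tape = bytearray(soup[i] + soup[j])
        execute(tape)
        soup[i], soup[j] = bytes(tape[:64]), bytes(tape[64:])
```

Even a three-byte string like `}}.`, run over itself, copies one of its own bytes forward, which is the seed of the bootstrap: code rewriting code until, eventually, something in the soup copies all of itself.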

The Origin of Life

It doesn't happen immediately; most of the time, the code just sits there, reading and writing over itself. It generally takes thousands, even millions of interactions before anything interesting happens. Let it run long enough, though, and some of that code coalesces into something that breeds, something that exchanges information with other programs (fucks, in other words). And when that happens, things really take off: self-replicators take over the soup in no time.

What's that? You don't see why that should happen? Don't worry about it; neither do the authors:

"we do not yet have a general theory to determine what makes a language and environment amenable to the rise of self-replicators"

They explored the hell out of it, though. They ran their primordial soups in a whole family of "extended Brainfuck languages"; they ran them under Forth; they tried them out under that classic 8-bit Z80 architecture that people hold in such nostalgic regard, and under the (almost as ancient) 8080 instruction sets. They built environments in 0, 1, and 2 dimensions. They measured the rise of diversity and complexity, using a custom metric— "High-Order Entropy"— describing the difference between "Shannon Entropy" and "normalized Kolmogorov Complexity" (which seems to describe the complexity of a system that remains once you strip out the amount due to sheer randomness[2]).

They did all this, under different architectures, different languages, different dimensionalities—with mutations and without—and they kept getting replicators. More, they got different kinds of replicators, virtual ecosystems almost, competing for resources. They got reproductive strategies changing over time. Darwinian solutions to execution issues, like "junk DNA" which turns out to serve a real function:

"emergent replicators … tend to consist of a fairly long non-functional head followed by a relatively short functional replicating tail. The explanation for this is likely that beginning to execute partway through a replicator will generally lead to an error, so adding non-functional code before the replicator decreases the probability of that occurrence. It also decreases the number of copies that can be made and hence the efficiency of the replicator, resulting in a trade-off between the two pressures."

I mean, that looks like a classic evolutionary process to me. And again, this is not a fragile phenomenon; it's robust across a variety of architectures and environments.

But they're still not sure why or how.

They do report one computing platform (something called SUBLEQ) in which replicators didn't arise. They suggest that any replicators which could theoretically arise in SUBLEQ would have to be much larger than those observed in other environments, which they suggest could be a starting point towards developing "a theory that predicts what languages and environments could harbor life". I find that intriguing. But they're not even close to developing such a theory at the moment.

Self-replication just—happens.

It's not an airtight case. The authors admit that it would make more sense to drill down on an analysis of substrings within the soup (since most replicators are shorter than the 64-byte chunks of code they used), but because that's "computationally intractable" they settle for "a mixture of anecdotal evidence and graphs"—which, if not exactly sus, doesn't seem especially rigorous. At one point they claim that mutations speed up the rise of self-replicators, which doesn't seem to jibe with other results suggesting that higher mutation rates are associated with a slower emergence of complexity. (Granted "complexity" and "self-replicator" are not the same thing, but you'd still expect a positive correlation.) As of this writing, the work hasn't yet been peer-reviewed. And finally, a limitation not of the work but of the messenger: you're getting all this filtered through the inexpert brain of a midlist science fiction writer with no real expertise in computer science. It's possible I got something completely wrong along the way.

Still, I'm excited. Folks more expert than I seem to be taking this seriously. Hell, it even inspired Sabine Hossenfelder (not known for her credulous nature) to speculate about Maelstromy scenarios in which wildlife emerges from Internet noise, "climbs the complexity ladder", and runs rampant. Because that's what we're talking about here: digital life emerging not from pre-existing malware, not from anarchosyndicalist script kiddies—but from simple, ubiquitous, random noise.

So I'm hopeful.

Maybe the Internet will burn after all.


  1. The paper cites Tierra and Core Wars prominently; it's nice to see that work published back in the nineties is still relevant in such a fast-moving field. It's even nicer to be able to point to those same call-outs in Maelstrom to burnish my street cred.

  2. This is a bit counterintuitive to those of us who grew up thinking of entropy as a measure of disorganization. The information required to describe a system of randomly-bumping gas molecules is huge because you have to describe each particle individually; more structured systems—crystals, fractals—have lower entropy because their structure can be described formulaically. The value of "High-Order" Entropy, in contrast, is due entirely to structural, not random, complexity; so a high HOE means more organizational complexity, not less. Unless I'm completely misreading this thing.

09-Nov-24
Joi Ito's Web [ 9-Nov-24 2:56am ]
Morning Thick Tea and Yuen [ 09-Nov-24 2:56am ]

[Image: IMG_0364.jpeg]
Time: 5:30
Scroll: "yuen" by Kitaro Nishida
Bowl: 9th Ohi Chozaemon
Tea: Hoshinoen "Hojyu"

Had a nice bowl of thick tea this chilly morning.

Nishida is the founder of the Kyoto School of Philosophy and one of the founding members of the Chiba Institute of Technology. The scroll says "yuen", which means far and distant in time and space, or eternity.

Tea practice has made me much more aware of time - many of the utensils we use are hundreds of years old, and the scrolls and the utensils we use will likely continue to be used for hundreds of years. Holding an ancient bowl, one can imagine when it was made, the people who handled it, and the society and history surrounding them. Then it is easy to imagine the bowl in the future and the people and cultures it will live among. Then stretch and keep pushing time into the past and the future until you envision eternity.

14-Oct-24
The Three-Bragger Problem [ 26-Sep-24 1:42am ]

Preamble: OK, I lied. Said last time that this time was gonna be about science, and no more of this fluffy promo bullshit. And I meant well. But this time, the fluffy promo is about me. And it's cool. And more to the point, I don't have to spend hours doing research on things I don't actually know much about, so it's fast. When you're writing to deadline, fast is good.

Brag 1.

So check this out:

I have a small hand in this universe. I'm developing some of its Lore. I am not allowed to give you any details, but none of you will be surprised to learn that said Lore contains certain, shall we say, Darwinian elements.

I think it's going to rock.

Brag 2.

There’s something I'm allowed to share even fewer details about than EVE. In fact, I'm not even allowed to say that I am involved in it, although the trailer dropped earlier than EVE’s and the curtain rises sooner. I am allowed to paste promo copy from a certain corporate entity about a certain project—in effect, to place someone else's generic ad copy onto the 'Crawl without explaining its relevance. This is an opportunity I must regretfully decline.

A shame, though. From what I've seen, it's gonna be awesome. Stay tuned.

Brag 3.

And this—this—may be the least significant item in terms of pop culture, but it is, by far, the closest to my heart. It is an honor that generally accrues only to the likes of Gary Larson, Greta Thunberg, and Radiohead.

Niko Kasalo & Josip Skejo have named a tribe of Australian Pygmy Grasshoppers after me.

I mean, not me personally, but one of my novels. The Tribe is Echopraxiini; the genus is Echopraxia; the species is E. hasenpuschi. And should any of you point out that correlation is not causation, and that there might be any number of reasons why someone might name a taxon after the neurological malady without even knowing about my novel, I've got you covered:

The paper in its entirety is paywalled, but I have uploaded a copy for your forensic edification just in case you think I'm full of shit. After reading it you may still conclude that I faked the whole manuscript, which offhand I cannot disprove. But if I did, you gotta admit I did a bang-up job.

Anyway: now you have some idea of the stuff I've been doing when I haven’t been writing Echopraxia Omniscience. I haven't just been lying around jerking off all these months.

Well, not exclusively.

Next up: science. Definitely[1].


  1. Probably.

24-Sep-24
Mondo 2000 [ 23-Sep-24 11:34pm ]

R.U. Sirius & Shira Chess are pleased (and a little frightened) to announce that we have contracted with Strange Attractor Press to publish Freaks in the Machine: Mondo 2000 in Late 20th Century Tech Culture. With a foreword by Grant Morrison.

Before the entire world was online, before it was divided into social media enclaves and corporate-sanctioned “likes,” before even WIRED magazine, there was Mondo 2000. Published from 1989-1997, Mondo reached a peak distribution of about 100,000, but the magazine was even more influential than its distribution implied. M2K was raved about by 90s media outlets, consumed by early tech culture, and demonstrated a transition from the literary cyberpunk style of Gibson and Vinge into an aspirational cyberpunk aesthetic that leaked into the material world. In 1994, Douglas Rushkoff described Mondo as the "voice of cyberculture."

Published by a rag-tag team of psychonauts and counterculture weirdos, and based out of a Berkeley Hills mini-mansion, the parties and strange goings-on were almost as legendary as the magazine itself. The Mondo publishers — editor-in-chief R.U. Sirius and Domineditrix Queen Mu — were not technologists. Yet their Bay Area publication created a desire for the technological zeitgeist of the coming millennium. In turn, the high (and sometimes low) weirdness of Mondo established a kaleidoscopic onramp that rocketed a lot of early adopter mutants (or "Mondoids") onto the so-called “information superhighway” of the late 90s and early 00s.

Part memoir, part history, and part critical analysis, Freaks in the Machine tells the storied adventures of the magazine’s tumultuous history, its strange cast of characters and its irreverent content.

"Mondo was the next bold stage in an evolutionary advance where street cred and an underground ethos would come with the potential to appeal to a mainstream audience, to wake up the straight world, announcing its arrival like a flare over the horizon so everyone could see and know this was the signal they'd been waiting for." —Grant Morrison

R.U. Sirius is still best known as the editor-in-chief of the great 1990s cyberculture magazine Mondo 2000, although some historians insist that he should be most remembered for his skills as a second baseman in the West Islip, New York little league, while others have raved about the infrequent and disturbing live appearances of the band Mondo Vanilli. [Note from Shira: There were no historians.] He has written for Time, Rolling Stone, Salon, WIRED and a bunch of other publications that are stored on the tip of his tongue. Books include Counterculture Through The Ages (with Dan Joy), Design for Dying (with Timothy Leary) and Cyberpunk Handbook (with St. Jude & Bart Nagel).

Shira Chess is an Associate Professor in Entertainment & Media Studies at the University of Georgia and a recovering game studies scholar. She is the author of several books on digital culture and video games, including the forthcoming MIT Press book The Unseen Internet: Conjuring the Occult in Digital Discourse. She joins this project to add context and history, to referee any nonsense, to try to make sense of Sirius's lunatic ravings, and to excise any excesses of cringe.


11-Sep-24
The Early Days of a Better Nation [ 11-Sep-24 2:35pm ]
Carol's funeral [ 11-Sep-24 2:35pm ]


More people came to Carol's funeral than there were seats in the crematorium chapel: our families, her friends and mine, some of whom had travelled a long way. The funeral directors, P B Wright and Sons, took care of the arrangements kindly and professionally. Catriona Miller, the humanist celebrant, conducted the service and delivered a warm and accurate tribute to Carol. I spoke about Carol's life with me, and Michael spoke for himself and Sharon about Carol as a mother. Two hymns were sung that had also been sung at our wedding. The closing music was a song Carol had played countless times: 'Stars' by Simply Red.

The Order of Service booklet featured a fine recent photograph of Carol by Michael, and some of Carol's own photographs of Gourock's sunset skies.

The collection was for two charities that Carol had actively supported: the RNLI, and Medical Aid for Palestinians. It raised £1138.50, which yesterday I rounded up to £1200 and divided evenly into two donations in her memory.

Many, many thanks to all who attended, and to all who contributed so generously, at the collection and online. Thanks also for the many messages and cards of sympathy, for which I and all the family are deeply grateful.
28-Aug-24
# [ 28-Aug-24 9:00am ]


Carol Ann MacLeod, 11 February 1952 to 16 August 2024

Carol, my beloved wife whom I met in 1979 and married in 1981, died on Friday 16 August.

She was the centre of my world, and she's gone.

There will be a funeral service at Greenock Crematorium, on Monday 2 September, at 2 pm, to which all family and friends are invited. Family flowers only please. There will be a retiral collection in aid of Carol's favourite charities.
24-Aug-24
Two-Step Forwards, Ten Years Back [ 12-Aug-24 8:07pm ]

I know, I know. Two pimpage posts in a row. Not my usual shtick, and I assure you not any kind of new normal; the stars just aligned that way this time around. For what it's worth, next time I expect to be talking about Darwinian evolution in digital ecosystems, complete with a tortured retcon arguing that I saw it all coming two decades ago with Maelstrom.

You know. The classics.

Forward One:

Artist, children's author, musician, video maestro—not to mention good friend and RacketNX nemesis—Steven Archer is at it again. I've sung his praises before on this 'crawl, even written a story based on one of his songs. I'm not the only one to appreciate the man's work, even though darkwave grunge is about as far as you can get from my usual proggy aesthetic; he's worked with entities as diverse as NASA and Alan Parsons. Neil Gaiman lauded his skills while Steven was still a student (granted, that endorsement has not aged as well as he might have hoped).

This time around he's released a jagged graphic novel—a companion piece to Stoneburner's Apex Predator album, though by no means do you have to experience one to appreciate the other— about canine deities who generally exist outside time and space but who, here in what we call reality, still crush cities underfoot like any self-respecting kaiju when they get pissed. Unlike last post's Alevtina and Tamara, there's no doubt that Tooth and Claw is a proper graphic novel. It's got a definite and coherent and very long story arc: it starts at the beginning of time (it's a creation myth at heart) between "waves of energy so far apart you cannot call them heat", and it ends in pretty much the same place. (Well, technically it ends with Nicolas Cage starring in a Ridley Scott movie about a giant wolf laying waste to the United States, but that's just part of the epilogue).

The art ranges all over the place, from saturated oils that bleed across the page to joyful childlike scribbles to even that A-word nobody uses any more for fear of provoking backlash. The verbiage, as usual, is a delight—"every species learns by breaking the things around them", "there goes God, making the scientists look stupid again", "I am what is left after the stars go out". Vignettes unfold in singularities and coffee shops and frozen steppes and burning cities. The vibe ranges from Crichton to Call of Duty to Indigenous Creation Myth by way of Lee Smolin. Conspiracy theorists rage on the Internet. A girl on her sixth birthday reenacts Armageddon with her stuffed animals. Saturn's Rings turn out to be the skid mark of an ancient deity slingshotting en route to earth. Soldiers just follow orders; scientists try to figure out how something the size of a mountain gets enough to eat. It's really good.

I wrote the Forward. That's pretty good too.

You can see the excerpts on this page. View the art, read the captions: a small taste, nothing more.

If you fancy a whole meal, here's where you get it.

Forward Two

"Will the explorers manage to escape the crazed thing and fight their way back to the Endurance?" asks the back-jacket text for the batshit novella Poiesis, then goes on to answer itself:

"Probably, because this is Episode One of a saga.
"But hey, you never know."

Which gives you a sense of the attitude that indie coauthors Valentina Kay and Daniele Bonfanti bring to their new venture: a splatterspace epic of indeterminate length, released one standalone chapter at a time, like some unholy love child of— well, in the Forward (yeah, I wrote one for this too) I describe it as something you might get if Sam Peckinpah and Quentin Tarantino collaborated on an episode of Doctor Who with Douglas Adams acting as creative consultant. I suppose that's as good a description as any. Poiesis plays with some very big ideas (a title like that, how could it not?) but it doesn't take them—or itself—too seriously. One of the series' protagonists is a superintelligent swarm of bees who's romantically involved with a sapient plant (the whole pollination thing, you understand). The good ship Endurance's military muscle consists of a couple of cheerful jarheads to whom getting a limb blown off is all in a day's work, and whose considerable arsenals include a gun that fires weaponized superbacteria the size of cocktail wienies. They all tool around the cosmos in a ship that, on the inside at least, looks like a seaside Mediterranean village, and they live in a reality formed by "the cognitions in the mind of the Universe actualized in perceptions"— which might have a familiar ring to anyone who's encountered the work of Bernardo Kastrup. (In fact, this whole dripping-viscera-laden first chapter revolves around questions of AI and consciousness in a way that suggests (to me, anyway) that Kay and Bonfanti also have a passing familiarity with Penrose and Hameroff's Orch OR hypothesis.)

The entire epic goes by the title Symbiosis. Only the first installment is out, so I don't know where it'll end up. But I do like the way it begins.

Ten Years Gone

I've done a fair number of podcasts over the years. Hell, I've done eight or nine in just the past year, two of those with Tales from the Bridge— at whom I seem to have become a semiregular, and whose crew got me onto a panel at Toronto's FanExpo back in 2021 (an event to which, curiously, I have never been invited back. This might have something to do with my gleeful endorsement of a video clip I played off the top, in which one character addresses the self-righteous environmentalism of another by asking why she'd had a child if she cared about the environment so much, itemizing the enormous impacts that first-world reproduction inflicts on ol' Earth, and offering to slit her sprog's throat to help redress the imbalance. The young parents sitting in the front row with their toddler stormed out before I'd even reached the good part.)

I don't usually pimp such appearances— partly because that's the podcasters' job, and partly because I don't want to be one of those people forever thumping their tubs about every minor appearance as though it were somehow on a par with discovering life on Enceladus. But I seem to be on a roll here anyway, and this latest release from the Bridgers—just a few weeks old—isn't so much an interview between podcasters and their guests as it is a long-overdue catch-up between a couple of buddies who haven't seen each other in over a decade.

Richard Morgan and I were exchanging emails as colleagues and mutual fans for a couple of years before the people at Crytek put us in competition with each other for the Crysis 2 gig. That was when we first met in the flesh, over in Germany—and where we reunited a couple of years later, to work on another game that never made it onto the market. (That was probably just as well, actually. Certain aspects of that project encouraged a sort of blurring of game and reality in a way that might have provoked, ahem, unfortunate behaviors among those with an infirm grip on the latter.)

We hit it off. We were a perfect fit for that whole arguing-ideas-over-beers thing that I've missed so much since I left academia. The man also proved his worth when he waded into the fray over the Requires Hate debacle—a battle from which any number of self-proclaimed "friends" slunk away, tails between legs, muttering something about not wanting to antagonize the Twitter crowd. Richard didn't care about any of that shit. He called it as he saw it.

But like I say, that was over ten years ago. Barring the occasional email, we haven't been in touch since—until Tales from the Bridge got us together to reminisce about the old days. They probably got more than they bargained for; at least, they got more than what they could fit into one podcast. So what I'm pimping here is only Part One. (Last-minute update: shit, Part Two's out there now as well. Damn. I gotta pay more attention to deadlines.)

Honestly, I don't know how good it is. I don't know how interesting you'll find it. I kind of stopped thinking in those terms at the first Duuude! It was beers and ideas, albeit without the beers. It was two old friends catching up.

Arthur Jafa once pointed out that Eric Clapton's "Layla" was not written for Clapton fans; it was for Pattie Boyd[1]. Others were welcome to listen in, though[2]. Maybe this conversation—in a much smaller, much-less-influential way— is something like that.

I, for one, had a blast.


  1. https://en.wikipedia.org/wiki/Layla
  2. By way of context, AJ was drawing parallels to his own art: he's in conversation with American Black culture, he's not talking to us white folks. But he doesn't mind if we eavesdrop.
05-Aug-24
# [ 05-Aug-24 3:38pm ]


My Glasgow Worldcon Schedule



As some of you may know, I'm a Guest of Honour at the Glasgow Worldcon. I haven't said enough about that here, I know. I'm well chuffed about it, needless to say. Here are the times and places where you can be sure to find me.

Autographing: Ken MacLeod, Thursday 8 August 2024, 13:00 GMT+1, Hall 4 (Autographs)

Opening Ceremony, Thursday 8 August 2024, 16:00 GMT+1, Clyde Auditorium

Morrow's Isle - Opera, Thursday 8 August 2024, 20:00 GMT+1, Clyde Auditorium

Iain Banks: Between Genre and the Mainstream, Friday 9 August 2024, 11:30 GMT+1, Alsh 1

Luna Press Book Launch Party, Friday 9 August 2024, 13:00 GMT+1, Argyll 2

Guest of Honour Interview: Ken MacLeod, Friday 9 August 2024, 16:00 GMT+1, Lomond Auditorium

Table Talk: Ken MacLeod, Saturday 10 August 2024, 11:30 GMT+1, Hall 4 (Table Talks)

The Making of Morrow's Isle - An Opera, Saturday 10 August 2024, 14:30 GMT+1, Argyll 2

NewCon Press Book Launch, Saturday 10 August 2024, 16:00 GMT+1, Argyll 3

The Politics of Modern Scottish SF, Saturday 10 August 2024, 20:30 GMT+1, Castle 1

Reading: Ken MacLeod, Sunday 11 August 2024, 10:00 GMT+1, Castle 2

Autographing: Ken MacLeod, Sunday 11 August 2024, 11:30 GMT+1, Hall 4 (Autographs)

An Ambiguous Utopia: 50 Years of Ursula K. Le Guin's The Dispossessed, Sunday 11 August 2024, 14:30 GMT+1, Meeting Academy M1

2024 Hugo Awards Ceremony, Sunday 11 August 2024, 20:00 GMT+1, Clyde Auditorium

Stroll with the Stars - Monday, Festival Park, Monday 12 August 2024, 09:00 GMT+1, Outside Crowne Plaza

Writing Future Scotland, Monday 12 August 2024, 13:00 GMT+1, Lomond Auditorium

#
04-Aug-24
The New Aesthetic [ 8-Jul-24 8:43am ]

Interference patterns from LED street lighting create pixelated shadows.

Rudy's Blog [ 22-Jul-24 12:43am ]
England with Barb [ 22-Jul-24 12:43am ]

Early in 2024 I started spending time with Barb Ash, who I met in the Los Gatos Coffee Roasting. Being with Barb makes me a lot happier than I’ve been for the last year and a half. I’m glad I met her. In May, we went ahead and did a trip to England together, spending […]

The post England with Barb first appeared on Rudy's Blog.

Podcast #115. "Big Germs" [ 10-Jul-24 8:34pm ]

June 23, 2024. Reading my anti-gun "Big Germs" story at the SF in SF meeting. In memory of Terry Bisson. And huge thanks to sound wizard Rusty Hodge. Press the arrow below to play "Big Germs." We’re using the new and improved .m4a sound file format instead of the old .mp3. If you have a […]

The post Podcast #115. "Big Germs" first appeared on Rudy's Blog.

(As usual, click on any of the following images to embiggen. Although I really shouldn’t have to be telling anyone that.)

You might have seen a dude by the name of Dimitry SkoLzki hanging around the gallery hereabouts. He did these distinctive black-and-white sketches—they have an almost wood-cut vibe—inspired by characters and events in Blindsight (and later, Echopraxia). They impressed my Chinese publishers so much they bought the rights for their own Blindopraxia imprints.

Russian by birth, currently based in Cyprus (he got the hell out of Dodge just before Putin went full-on Goon Squad), SkoLzki has recently put out a heartfelt volume of—I'm not quite sure how to describe them, exactly. Certain cultural outlets are calling it a collection of "noir fairy tales"; others call it a graphic novel. Neither description is wrong, exactly, but neither really captures the essence of this surrealistic, horrific, old-time Russian Grimm-tales volume. Limassol Today says that it's about "the author exploring his inner worlds and the depths of his personality through metaphors and symbols." Dmitriy tells me that it was forged against a backdrop of "longterm depression" (to which I can only say, Dude, if you could pull off something like this when you're depressed, I can barely imagine what you could come up with when you're ecstatic). The phrase "mystical noir illustrated tales" has been bandied about.

What we're looking at is an interlocking series of nearly a hundred vignettes clumped together into five larger—I'm going to call them dream tales—across 227 pages. The balance between word and picture varies from leaf to leaf. Sometimes a mural sprawls across two facing pages with barely three lines of text to accompany it; other times a whole page of cramped handwriting has to stand on its own without so much as a stick figure for support. The balance between light and dark is a lot more consistent: darkness always wins, even in the happy bits.

The volume is titled Alevtina and Tamara, but— while those two sisters are omnipresent throughout— the stories really orbit around their brother Lyonka. Lyonka dies pretty much out of the gate (he's a sickly child who likes beetles) but quickly resurrects and lives out the rest of the book as some kind of hyperphallic goat-human hybrid. A mother figure the size of a mountain breezes through the woods now and then; predators and prey are always prowling around at the edges of the page. Quests trivial and epic get wrapped up in a few lines of free verse, or abandoned halfway through when someone gets distracted by something shiny. A lot of time is spent in trees. It's a weird, dark, disjointed, strangely innocent celebration of the macabre, East-European-mythic right down to the marrow, and I loved it.

The whole thing is Dream Logic made flesh. I bet Lynch would be a fan. I bet Cronenberg would too.




But I may be way too late sending out invitations to this party; Alevtina and Tamara came out back in April. A little like Lyonka and his sisters, I too have been pulled this way and that by various deadlines, ambitions, and emergencies (I'm still a bit soggy after slurping the pond out of our basement during the recent flooding—Climate Change finally hits the Magic Bungalow).

It's not that SkoLzki's work is the kind that staledates, mind you. I might even call it "timeless", if I wasn't so afraid of descending into cliché. But SkoLzki's only released this thing in a limited edition of 300 beautifully embossed hardcovers; if they're not sold out now, they might be soon.

So if the renditions you're looking at here call out to you, head over to the store and check the inventory. Alevtina and Tamara isn't for everybody, but the people it is for really don't want to miss out on this.

25-Jul-24

—Our Father, Who Art in…

But father’s out of style, isn’t it? They call you The Admin now. The Board. Creation was a group project. I don’t know how they know that, but apparently there are a lot of you. Maybe I should call you Odin, or Thor. Or—Loki, given the way things are falling apart down here.

I always thought of you as the Heavenly Father, and all this time I’ve been praying to a committee.

This should feel different. It’s not faith any more, after all. It’s science. It’s, it’s evidence-based as they like to say. Before, I could just talk to you; it never occurred to me to wonder how you’d hear my small voice out of all these billions. I think maybe part of me was hoping you wouldn’t.

But now there’s a mechanism. We’re all just numbers now, that’s all we’ve ever been. These thoughts in my head are just math and state variables and logic gates. If that’s true — and what kind of flat-earther would deny it, after five years of merciless confirmation? — then the model can be frozen between one tick of the system clock and the next. The whole universe could have stopped dead a split-second ago, you could have poked around to your heart’s content. In the space between one breath and the next, you could have read every thought sparking in every creature in the universe.

Maybe you plot my soul on some kind of graph, shame on one axis, remorse on another. Maybe you can see every x about to tip over into y. You know what my next thought is going to be, and the thought after. Math is deterministic, after all. And when you’ve seen enough, you can just — start the cosmos up again and wander off for a coffee, and you don’t even have to hang around to hear the rest of this prayer because you already know how it’s going to end.

I don’t care what they call it these days: Handshaking Protocols, NPCUI, OSping, Divinity dialup. It’s still just prayer. I know that, because it feels the same as it always did. Even if nothing else does.

It feels like no one’s listening.

05-Jul-24
The New Aesthetic [ 22-Apr-24 11:15am ]

British architecture studio Dowen Farmer Architects has released plans for Portal Road, a multistorey ghost-kitchen tower block in west London that would shuttle food to a public food hall like the “Ministry of Magic”.

Dowen Farmer Architects’ proposal for the site, called Portal Road, would measure 28,000 square metres and consist of 12 storeys, 10 of which would be dedicated to 260 rentable ghost kitchens for use by local shops and restaurants.

Dowen Farmer designs cubic ten-storey tower for ghost kitchens | DEZEEN

Lee Sedol, reflecting on AlphaGo, eight years on.

the hauntological society [ 17-May-24 4:13pm ]

Dispatch the maimed, the old, the weak, destroy the very world itself, for what is the point of life if the promise of fulfilment lies elsewhere?

On the windswept coast of rural Suffolk, a deranged scientist attempts to extract the essence of life itself.

There are very audible echoes in The Breakthrough of Peter Newbrook's The Asphyx and even Peter Sasdy's The Stone Tape (scripted by Nigel Kneale), though it's actually based on a Daphne Du Maurier short story that predates both of them. Written in 1964 as a favour to Kingsley Amis, who was putting together an anthology of science fiction stories that was never published, the story turned up in Du Maurier's 1971 collection Not After Midnight, and Other Stories. Graham Evans' television play, adapted by Clive Exton, is faithful to the story, but possibly as a consequence it's far too leisurely for its own good.

Computer specialist Stephen Saunders (Simon Ward) is sent by a government minister, Sir John Fowler (Anthony Nicholls), to a laboratory, Saxmere, situated on the salt marshes of the East Suffolk Coast, ostensibly to help maintain its fantastically clunky and oh-so-70s computer - all oscilloscopes, huge reels of tape and inexplicable banks of flashing lights. It turns out that the Saxmere team, made up of "Mac" Maclean (Brewster Mason), Robbie (Clive Swift), Janus (Roy Boyd), Ken (Thomas Ellice, here credited as Martin C. Thurley) and Cerberus the dog are trying to use the computer to help them capture human psychic energy (or "Force 6" as Mac dubs it) at the very moment of death. The terminally ill Ken is chosen as the test subject and a developmentally challenged young local girl, Niki (Rosalind McCabe), who shows some talent as a medium and seems to have caused poltergeist activity in the past, is drafted in to act as a conduit to him after he dies. The experiment seems to be a success but Niki reports back that Ken wants his life force released and the researchers realise with horror that their process captures more than just psychic energy.

— Kevin Lyons — EOFFTV - The Encyclopedia of Fantastic Film and Television

Rudy's Blog [ 23-Jun-24 12:11am ]

This is the text of "Big Germs," a revised and abridged story that I am reading at the SF in SF gathering in San Francisco at 6 pm on Sunday, June 23, 2024. This post appears on Medium as well. A longer and unrevised version of "Big Germs" appeared in BoingBoing on May 22, 2024. […]

The post Reading "Big Germs" at SF in SF. Print Sale. first appeared on Rudy's Blog.

Alien invasions and interspecies war are played-out tropes. In my Sqinks novel, sqink aliens are arriving. And I was going to have it be an alien invasion. But then I thought to ask Stephen Wolfram for a better idea. Me: What do the invading aliens want from us? Stephen: Anthropology? Understand an alien mind to […]

The post Invading Aliens? No, They're Exchange Students. (With Input from Stephen Wolfram) first appeared on Rudy's Blog.

In Memory of Terry Bisson [ 30-Mar-24 8:53pm ]

On March 30, 2024, I spoke at an event sponsored by City Lights books, in honor of Terry Bisson.  Here’s my three-frame pan of the crowd in front of me. And here’s a version of what I said. Terry Bisson 1942-2024 Memorial event at the Lost Church in SF I met Terry in 1984.  We […]

The post In Memory of Terry Bisson first appeared on Rudy's Blog.

Over the years I’ve written two non-fiction books on the fourth dimension, edited a book of C. H. Hinton’s writings on the fourth dimension, published a novel set in the fourth dimension, and worked the concept into a number of my other novels and short stories. Shortly before Christmas, 2023, Jeff Carreira interviewed me about […]

The post The Reality of the Fourth Dimension first appeared on Rudy's Blog.

A List Apart: The Full Feed [ 30-May-24 7:04pm ]
User Research Is Storytelling [ 30-May-24 7:04pm ]

Ever since I was a boy, I've been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I'd get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there's an element of theater to UX—I hadn't really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.

Think of your favorite movie. More than likely it follows a three-act structure that's commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.

[Figure: the three-act structure in movies, charting the narrative arcs of The Godfather and The Dark Knight across three acts. © 2024 StudioBinder; image used with permission.]

Use storytelling as a structure to do research

It's sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the "right" choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users' real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.

In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let's look at the different acts and how they align with user research.

Act one: setup

The setup is all about understanding the background, and that's where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You're learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn't need to be a huge investment in time or money.

Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: "'Walk me through your day yesterday.' That's it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you're doing ethnography." According to Hall, "[This] will probably prove quite illuminating. In the highly unlikely case that you didn't learn anything new or useful, carry on with enhanced confidence in your direction."  

This makes total sense to me. And I love that this makes user research so accessible. You don't need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it'll help you better understand them and what's going on in their lives. That's really what act one is all about: understanding where users are coming from. 

Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you've heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that's the beginning of a compelling story. It's the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can't complete certain tasks. Or maybe they do empathize with users' struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.

Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It's like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research. 

This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

Act two: conflict

Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that's tripping users up. Like act two in a movie, more issues will crop up along the way. It's here that you learn more about the characters as they grow and develop through this act. 

Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: "As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new." 

There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user's struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.
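Nielsen's diminishing-returns point comes from the problem-discovery model he developed with Tom Landauer: the share of usability problems found by n users is 1 − (1 − L)^n, where L is the fraction a single user uncovers (about 31% in their data; that figure is from Nielsen and Landauer's research, not from this article). A quick sketch of the curve:

```python
# Nielsen & Landauer's problem-discovery model: the share of usability
# problems found by n test users, assuming each user independently
# uncovers a fraction L of them (L ~= 0.31 in their published data).
def problems_found(n: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n

if __name__ == "__main__":
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

With L at 0.31, five users already surface about 84% of the problems, and each additional user adds less and less—which is exactly the "wasting your time" effect Nielsen describes.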

Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests as going to a play, and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they're seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors' interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater, where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn't have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that's often missing from remote usability tests.

That's not to say that the "movies"—remote sessions—aren't a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what's going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can't log in or get their microphone working. 

The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they're problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you'll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions. 

Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you're evaluating in the usability test aren't grounded in a solid understanding of your users (foundational research), there's not much to be gained by doing usability testing in the first place. That's because you're narrowing the focus of what you're getting feedback on, without understanding the users' needs. As a result, there's no way of knowing whether the designs might solve a problem that users have. It's only feedback on a particular design in the context of a usability test.  

On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won't know whether the thing that you're building will actually solve that. This illustrates the importance of doing both foundational and directional research. 

In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.

Act three: resolution

While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it's important to have an audience for the first two acts, it's crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users' feedback together, ask questions, and discuss what's possible within the project's constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.

This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. "The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved," writes Duarte. "That tension helps them persuade the audience to adopt a new mindset or behave differently."

Picture this. You've joined a squad at your company that's designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you're designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed. 

Between the fantasy of getting it right and the fear of it going wrong—like when we encounter "persofails" in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It's an especially confounding place to be a digital professional without a map, a compass, or a plan.

For those of you venturing into personalization, there's no Lonely Planet and few tour guides because effective personalization is so specific to each organization's talent, technology, and market position. 

But you can ensure that your team has packed its bags sensibly.

A sign at a mountain scene says "Designing for personalization makes for strange bedfellows." A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne.

There's a DIY formula to increase your chances of success; at minimum, you'll defuse your boss's irrational exuberance. Before the party, though, you'll need to prepare effectively.

We call it prepersonalization.

Behind the music

Consider Spotify's DJ feature, which debuted this past year.

https://www.youtube.com/watch?v=ok-aNnc0Dko

We're used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

So how do you know where to place your personalization bets? How do you design consistent interactions that won't trip up users or—worse—breed mistrust? We've found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

From Big Tech to fledgling startups, we've seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program's ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.

Time and again, we've seen effective workshops separate future success stories from unsuccessful efforts, saving untold time and resources and preserving collective well-being in the process.

A personalization practice involves a multiyear effort of testing and feature development. It's not a switch-flip moment in your tech stack. It's best managed as a backlog that often evolves through three steps: 

  1. customer experience optimization (CXO, also known as A/B testing or experimentation)
  2. always-on automations (whether rules-based or machine-generated)
  3. mature features or standalone product development (such as Spotify's DJ experience)

This is why we created our progressive personalization framework and why we're field-testing an accompanying deck of cards: we believe that there's a base grammar, a set of "nouns and verbs," that your organization can use to design experiences that are customized, personalized, or automated. You won't need these exact cards, but we strongly recommend that you create something similar, whether digital or physical.

Set your kitchen timer

How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here's a summary of our broader approach along with details on the essential first-day activities.

The full arc of the wider workshop is threefold:

  1. Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
  2. Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
  3. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.

Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

Kickstart: Whet your appetite

We call the first lesson the "landscape of connected experience." It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here's a list of 142 different interactions to jog your thinking.

This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here's a long-form primer and a strategic framework.

Assess each example that you discuss for its complexity and the level of effort you estimate it would take your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.

Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It's also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.

A two-by-two grid shows the four areas of emphasis for a personalization program in an organization: business efficiency, customer experience, business orchestration, and customer understanding. The focus varies from front-stage to back-stage and from business-focused to customer-focused outcomes. Getting intentional about the desired outcomes is an important component of a large-scale personalization program. Credit: Bucket Studio.

Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can't prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We're pretty sure that you do: it's just a matter of recognizing the relative size of that need and its remedy.) In our cards, we've noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.

Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.

Barriers to personalization, according to a 2016 Boston Consulting Group research study. The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.

At this point, you've hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you're ready to continue.

Hit that test kitchen

Next, let's look at what you'll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This raises the question: where do you begin when configuring a connected experience?

What's important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program's regularly evolving menu.

The Progressive Personalization Model v2: a pyramid with the following layers, starting at the base and working up: Raw Data (millions), Actionable Data (hundreds of thousands), Segments (thousands), Customer Experience Patterns (many), Interactions (dozens), and Goals (a handful). Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.

The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating "dishes" is the way that you'll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.

The dishes will come from recipes, and those recipes have set ingredients.

A photo of the Progressive Personalization deck of cards, with accompanying text reading: "Align on key terms and tactics. Draft and groom a full backlog, designing with data." Cards have colors corresponding to the layers of the personalization pyramid and include actionable details. In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.

Verify your ingredients

Like a good product manager, you'll make sure—and you'll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience you're targeting, the content and design elements, the context for the interaction, and your measure of how it'll all come together.

This isn't just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team: 

  1. compare findings toward a unified approach for developing features, not unlike artists painting with the same palette; 
  2. specify a consistent set of interactions that users find uniform or familiar; 
  3. develop parity across performance measurements and key performance indicators. 

This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
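The if-then framing can be captured in something as simple as a condition/action record. Here's a minimal sketch in Python; every name and field here is our own hypothetical illustration, not the schema of any particular personalization engine:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Personalization:
    """One documented if-then statement: a trigger and a response."""
    name: str
    condition: Callable[[dict], bool]  # the "if": evaluated against a user context
    action: str                        # the "then": the interaction to deliver
    kpi: str                           # how the team will measure success

# Hypothetical recipe: send a winback offer after a failed renewal.
winback = Personalization(
    name="winback-email",
    condition=lambda user: user.get("days_since_renewal_failure", 0) >= 3,
    action="send promotional renewal email",
    kpi="renewal rate within 14 days",
)

# Evaluating the rule for one user context:
user = {"days_since_renewal_failure": 5}
if winback.condition(user):
    print(winback.action)
```

Because every personalization in the backlog shares the same shape, the team can compare rules side by side and check that each one names its audience, trigger, interaction, and KPI before it's prioritized.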

Compose your recipe

What ingredients are important to you? Think in a who-what-when-why construct:

  • Who are your key audience segments or groups?
  • What kind of content will you give them, in what design elements, and under what circumstances?
  • And for which business and user benefits?

We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.

Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below. 

  1. Nurture personalization: When a guest or unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to find a related title they may want to read, saving them time.
  2. Welcome automation: When there's a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
  3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
A "nurture" automation may trigger a banner or alert box that promotes content that makes it easier for users to complete a common task, based on behavioral profiling of two user types. Credit: Bucket Studio.

A "welcome" automation may be triggered for any new user, sending an email to help familiarize them with the breadth of a content library; this email ideally helps them consider selecting various titles (no matter how much time they devote to reviewing the email's content itself). Credit: Bucket Studio.

A "winback" automation may be triggered for a specific group, such as users with recently failed credit-card transactions or users at risk of churning out of active usage, presenting them with a specific offer to mitigate near-future inactivity. Credit: Bucket Studio.
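The three recipes above all decompose into the same who-what-when-why slots. As a hypothetical sketch (the field names are ours, not taken from the cards or any product), they might be recorded like this for later grooming and prioritization:

```python
# Each recipe answers who / what / when / why, mirroring the card categories.
recipes = [
    {
        "name": "nurture personalization",
        "who": "guest or unknown visitor",
        "what": "banner or alert bar surfacing a related title",
        "when": "on interacting with a product title",
        "why": "save the user time finding something to read",
    },
    {
        "name": "welcome automation",
        "who": "newly registered user",
        "what": "email showing the breadth of the content catalog",
        "when": "shortly after registration",
        "why": "make them a happier subscriber",
    },
    {
        "name": "winback automation",
        "who": "user with a lapsing subscription or failed renewal",
        "what": "email with a promotional offer",
        "when": "before lapse or after a failed renewal",
        "why": "prompt them to reconsider renewing",
    },
]

# A quick completeness check: a recipe isn't ready to pitch until it
# answers all four questions.
for recipe in recipes:
    missing = [k for k in ("who", "what", "when", "why") if not recipe.get(k)]
    assert not missing, f"{recipe['name']} is missing {missing}"
```

Keeping the recipes in one structure like this makes the later pitch-and-prioritize stages of the workshop easier, since every candidate is described in the same terms.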

A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we've also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.

You can think of the later stages of the workshop as shifting focus from recipes toward a cookbook, like a more nuanced kind of customer-journey mapping. Individual "cooks" pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in; from there, the resulting collection is prioritized for finished design and delivery to production.

Better kitchens require better architecture

Simplifying a customer experience is a complicated effort for those inside delivering it. Beware anyone who says otherwise. That said, "Complicated problems can be hard to solve, but they are addressable with rules and recipes."

When personalization becomes a laugh line, it's because a team is overfitting: they aren't designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI's output quality, for example, is indeed limited by your IA. Spotify's poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.

You can definitely stand the heat…

Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.

This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn't an overnight affair. But if you use the same cookbook and shared recipes, you'll have solid footing for success. We designed these activities to make your organization's needs concrete and clear, long before the hazards pile up.

While there are associated costs toward investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don't squander it. The proof, as they say, is in the pudding.

23-Jun-24
Joi Ito's Web [ 22-Jun-24 6:13am ]

I was born in Kyoto and Kyoto is one of my favorite cities. It's rich with culture and nuance. One of the hardest things for non-Kyoto people to navigate is the many layers of politeness. Everyone smiles at you and treats you very nicely. However, it's quite dangerous to take everything at face value. The people of Kyoto often tell you what they want you to do, veiled in a nice-sounding statement or request, which is hard for non-Kyoto people to understand. Sometimes, if you take the comment or offer at face value, you will be shunned without even knowing it.

I saw some wonderful stickers on Twitter that show what Kyoto people might say, "tatemae," and the reverse side that shows what they really mean, "honne."

I asked permission to translate the stickers so non-Japanese people could understand them. However, upon translating them, I realized that the politeness of the "tatemae" and the rudeness of the "honne" doesn't really come through in English, but I think you'll get the gist. And Rie's face says it all.

Enjoy.

Ikezu stickers of Kyoto people with a hidden side

The following text was translated from the original post in Japanese.

ikezu_KV_v2_light.jpg

If someone says, "Do you want some pickles?" to you in Kyoto, it means "hurry up and leave."

This kind of high-context communication that most people would never notice is called "ikezu."

This "ikezu culture" has long been recognized as uniquely Kyoto, but we have noticed that it has not yet been converted into a tourism resource.

Just as Osaka has turned the prefectural stereotype of comedians into a tourism resource, Kyoto's "ikezu" should also become a tourism resource.

With this in mind, we created a new souvenir of Kyoto, the "Kyoto-people-with-hidden-meaning ikezu sticker."

This product takes advantage of the characteristic of "ikezu" to "convey in a roundabout way what is difficult to say". It allows Kyoto people to convey their real feelings to those outside of Kyoto.

As the name suggests, this product has a double-sided structure. The front side depicts a polite but somewhat mean-spirited "ikezu" front, and the back side reveals the hidden true feelings of the Kyoto people.

The front of the card shows the "tatemae" which is the polite roundabout words.

The back side has "honne" which is what the words actually mean.

Product Lineup
ikezu_use_v3-2048x846.jpg

We have created four types of products, one for each of the "hard-to-express" requests that occur in various situations at home, so that people from outside of Kyoto can take them home and use them.

Toilet section: When you want to tell someone, "Please don't pee standing up."

ikezu_toilet_image_01.jpg
Front text: "My toilet seat may not be the most comfortable, but if you don't mind, please try it."

ikezu_toilet_image_03.jpg
Back text: "Don't do it standing up, okay?"

Entrance: When you want to tell someone, "Please don't come to my house in dirty clothes."

ikezu_entrance_image_01.jpg
Front text: "How nice to see you. Did you go to Lake Biwa?"

ikezu_entrance_image_03.jpg
Back text: "You come here looking dirty! Go wash everything in the Kamogawa River."

Dining table version: When you want to tell someone, "Please don't make sounds while you eat."

ikezu_dining_image_v2_01.jpg
Front text: "You know what? It's okay to eat buckwheat noodles with a slurping sound."

ikezu_dining_image_v2_03.jpg
Back text: "Kucha kucha kucha you're noisy!"

Post section: When you want to tell people, "Please don't put unnecessary flyers in the mailbox."

ikezu_post_image_02.jpg
Front text: "Sorry, we only have a small mailbox. Thank you, Mr. Habakari."

ikezu_post_image_03.jpg
Back text: "Don't put those stupid flyers in here. They're a nuisance."

13-May-24
Personally, I’d still put “Hope” in quotes.

Last month it was the Atlantic, where I pretended to know something about AI. This month it's the MIT Reader, and the subject is The Imminent Collapse of Civilization. Honestly, I had no idea I was such an expert on so many things.

This time, though, I'm not so much an expert as a foil. Dan Brooks (a name long-time readers of this blog may recognize) and Sal Agosta (whose concept of "sloppy fitness" careful readers of my novelette "The Island" may recognize) have written a book called A Darwinian Survival Guide: Hope for the Twenty-First Century. Their definition of "hope" is significantly more restrained than the tech bros and hopepunk authors would like: not once do they suggest, for example, that we could all keep our superyachts if we just put a giant translucent pie plate into space to cut incident sunlight by a few percent. Brooks & Agosta's definition of hope is far more appropriate for a world in which leading climate scientists admit to fury and despair at political inaction, decry living in an "age of fools", and predict by a nearly five-to-one margin that not only is 1.5°C a pipe dream, but that we'll be blowing past 2.5°C by century's end. They've internalized the growing number of studies which point to global societal collapse around midcentury. Their idea of hope is taken explicitly from Asimov's Foundation series: not How do we prevent collapse, but How do we come back afterward? That's what their book is about.

Casual observers might see my name where bylines usually go, and conclude that this is somehow my interview. Don't be fooled: the only thing I lay exclusive claim to here is the intro. This is about Dan and Sal. This is their baby; all I did was poke at it from various angles and let Dan react as he would. Our perspectives do largely overlap, but not entirely. (Unlike Dan, I do think the extinction rates we're inflicting on the planet justify the use of the word "crush"—although I take his point that the thing being crushed is only the biosphere as it currently exists, not the biosphere as a dynamic and persistent entity. I also confess to a certain level of bitterness and species-self-loathing that Dan seems to have avoided; I'm pretty certain the biosphere would be better off without us.)

The scene of the Crime.

But there's that word again: hope. Not the starry-eyed denial of reality that infests the Solarpunk Brigade, not the Hope Police's stern imperative that We Must Never Feed A Narrative of Hopelessness and Despair no matter what the facts tell us. Just the suggestion that after everything falls apart—just maybe, if we do things right this time—we might climb back out of the abyss in decades, instead of centuries.

Probably still not what most people want to hear. Still. I'll take what I can get.

So go check it out—keeping in mind, lest you quail at all the articulate erudition on display, that the transcript has been edited to make us look a lot more coherent than we were in real life.

I mean, we were drinking heavily the whole time. What else would you expect, given the subject matter?

01-Apr-24
Joi Ito's Web [ 1-Apr-24 1:10am ]
True North Between the Dragons [ 01-Apr-24 1:10am ]

true north in office.jpeg

This calligraphy, mounted on a hanging scroll, was written by the Zen monk Sogan Kogetsu, who lived from November 8, 1574 to August 19, 1643, spanning the Momoyama and early Edo periods. He was the chief priest of Daitokuji Temple and the son of Munenori Tsuda, a wealthy merchant in Sakai who served Oda Nobunaga and Toyotomi Hideyoshi as a tea master. In 1611, he took over the Kuroda family's family temple, the sub-temple Ryuko-in, which contains Mittan, a national-treasure tea room that I visited last year. Kogetsu's calligraphy is popular for tea ceremony hangings.

The calligraphy characters are: 斗指両辰間 - toshi ryoushin no kan

斗 means "dipper" and refers to the Big Dipper, which always points to true north.
指 means "to point".
Together, they mean to point to true north.
両 means "both," 辰 means "dragon," and 間 means "space." "両辰間" means the space between the two dragons. The two dragons represent the extremes of a dichotomy, such as good and evil or light and dark. The phrase means that you should find your true north and follow it, navigating between the extremes. This is the "middle way" often described in Buddhism.

This hanging scroll is also very appropriate for this year, the Year of the Dragon, and it feels somehow relevant to my own life. It's currently hanging in the President's office at the Chiba Institute of Technology.

I was reading Souoku Sen's book on Tea recently and he writes about how when you look at the 茶会記 (tea ceremony logs) of the period, they describe the hanging scroll's colors, dimensions, etc. but usually don't record what the scroll actually says or means. It could be that most people couldn't read them. This was a bit heartening for me since Japanese calligraphy is very hard to read and understand, but often rewarding once you do. (Souoku is a descendant of Rikyu and the current head of the Mushanokoji School of Tea.)


explanation of true north.jpg
A scroll describing the meaning and interpretation of the calligraphy written by a monk.

14-Mar-24
Underdog Overdrive [ 14-Mar-24 8:36pm ]

First, a PSA:

In keeping with my apparent ongoing role as The Guy Who Keeps Getting Asked to Talk About Subjects In Which He Has No Expertise (and for those of you who didn't see the Facebook post), The Atlantic solicited from me a piece on Conscious AI a few months back. The field is moving so fast that it's probably completely out of date by now, but a few days ago they put it out anyway. (Apologies for the lack of profanity therein. Apparently they have these things called "journalistic standards".)

And Now, Our Feature Presentation:

I am dripping with sweat as I type this. The BUG says I am as red as a cooked lobster, which is telling in light of the fact that the game I have been playing is the only VR game in my collection that you play sitting down. It should be a game for couch potatoes, and I'm sweating as hard as if I'd just done 7k on the treadmill. And I still haven't even made it out of the Killbox. I haven't even got into the city of New Brakka yet, and until I can get into that EM-shielded so-called Last Free City, the vengeful-nanny AI known as Big Sys is gonna keep hacking into my brother's brain until it's nothing but a protein slushy.

The game is Underdogs, the new release from One Hamsa, and it has surprisingly deep lore for a game that consists mainly of robots punching each other in the face. I know—even though I've encountered very little of it in my playthroughs so far—because I wrote a lot of it. After a quarter-century of intermittent gigs in the video game business, on titles ranging from Freemium to Triple-A, this is the first time I've had a hand in a game that's actually made it to market (unless you want to count Crysis 2—which, set in 2022, is now a period piece—and for which I only wrote the novelization based on Richard Morgan's script.)

Underdogs is a weird indie chimera: part graphic novel, part tabletop role-play, part Roguelike mech battle. It's that last element that's the throbbing, face-pounding heart of the artifact of course, the reason you play in the first place. You climb into your mech and hurtle into the KillBox and it's only after an hour of intense metal-smashing physics that you realize you're completely out of breath and your headset is soaked like a dishrag. The other elements are mere connective tissue. Combat happens at night; during daylight hours you're hustling for upgrades and add-ons (stealing's always an option, albeit a risky one), negotiating hacks and sabotage for the coming match, getting into junkyard fights over usable salvage. Sometimes you find something in the rubble that boosts your odds in the ring; sometimes the guy hired to fix your mech fucks up, leaves the machine in worse shape than he found it, and buggers off with your tools. Sometimes you spend the day desperately trying to find parts to fix last night's damage so you won't be going into the ring tonight with a cracked cockpit bubble and one arm missing.

All this interstitial stuff is presented in a kind of interactive 2.5D graphic novel format; the outcomes of street fights or shoplifting gambits are decided via automated dice roll. Dialog unspools via text bubble; economic transactions, via icons and menus. All very stylized, very leisurely. Take your time. Weigh your purchase options carefully. Breathe long, calm breaths. Gather your strength. You're gonna need it soon enough.

Because when you enter the arena, it's bone-crunching 3D all the way.

Mech battles are a cliché of course, from Evangelion to Pacific Rim. But they're also the perfect format for first-person VR, a conceit that seamlessly resolves one of the biggest problems of the virtual experience. Try punching something in VR. Swing at an enemy with a sword, bash them with a battle-axe. See the problem? No matter how good the physics engine, no matter how smooth the graphics, you don't feel anything. Maybe a bit of haptic vibration if your controllers are set for it— but that hardly replicates the actual impact of steel on bone, fist in face. In VR, all your enemies are weightless.

In a mech, though, you wouldn't feel any of that stuff first-hand anyway. You climb into the cockpit and wrap your fingers around the controllers for those giant robot arms outside the bubble; lo and behold, you can feel that in VR too, because here in meatspace you're actually grabbing real controllers! Now, lift your hands. Bring them down. Smash them together. Watch your arms here in the cockpit; revel in the way those giant mechanical waldos outside mimic their every movement. One Hamsa has built their game around a format in which the haptics of the game and the haptics of the real thing overlap almost completely. It feels satisfying, it feels intuitive. It feels right.

(They've done this before. Their first game, RacketNX, is hyperdimensional racquetball in space: you stand in the center of a honeycomb-geodesic sphere suspended low over a roiling sun, or a ringed planet, or a black hole. The sphere's hexagonal tiles are festooned with everything from energy boosters to wormholes to the moving segments of worm-like entities (in a level called "Shai-Hulud"). You use a tractor-beam-equipped racket to whap a chrome ball against those tiles. But the underlying genius of RacketNX lay not in the glorious eye-candy, nor in the inventive and ever-changing nature of the arena's tiles, but in the simple fact that the player stands on a small platform in the middle of the sphere; you can spin and jump and swing, but you do not move from that central location. With that one brilliant conceit, the devs didn't just sidestep the endemic VR motion sickness that results from the eyes saying I'm moving while the inner ears say I'm standing still; they actually built that sidestep into the format of the game, made it an intrinsic part of the scenario rather than some kind of arbitrary invisible wall. They repeat that trick here in Underdogs; they turn a limitation in the technology into a seamless part of the world.)

As I said, I haven't got out of the KillBox yet. I've only made it far enough to fight the KillBox champion a couple of times, and I got my ass handed to me both times. This may be partly because I'm old. It probably has more to do with the fact that when you die in this game you go all the way back to square one, and have to go through those first five days of fighting all over again (Roguelike games are permadeath by definition; no candy-ass save-on-demand option here). That's not nearly as repetitive as you might think, though. Thanks to the scavenging, haggling, and backroom deals cut between matches, your mech is highly customizable from match to match. Your hands can consist of claws, wrecking balls, pile-drivers, blades and buzz-saws. Any combination thereof. You can fortify your armor or amp your speed or add stun-gun capability to your strikes. I haven't come close to exploring the various configurations you can bring into the arena, the changes you can make between matches. And while you're only fighting robots up until the championship bout with your first actual mech opponent, there's a fair variety of robots to be fought: roaches and junkyard dogs and weird sparking tetrapods flickering with blue lightning. Little green bombs on legs that scuttle around and try to blow themselves up next to you. And it only adds to the challenge when one wall of the arena slides back to reveal rows of grinding metal teeth ready to shred you if you tip the wrong way.

Apparently there are four arenas total (so far; the game certainly has expansion potential). Apparently the plot takes a serious turn after the second. I'll find out eventually. If you've got that far, please: no spoilers.

*

But I mentioned the Lore.

They brought me in to help with that. They already had the basic premise: Humanity, in its final abrogation of responsibility, has given itself over to an AI nanny called Big Sys who watches over all like a kindly Zuckerborg. There's only one place on earth where Big Sys doesn't reach, the "Last Free City": an anarchistic free-for-all where artists and malcontents and criminals— basically, anyone who can't live in a nanny state— end up. They do mech fighting there.

My job was to flesh all that stuff out into a world.

So I wrote historical backgrounds and physical infrastructure. I wrote a timeline explaining how we got there from here, how New Gehenna (as I called it then) ended up as the beating heart of the global mech-fighting world. I developed "Tribes"—half gang, half gummint—with their own grudges and ideologies: The Satudarah, the Java men, the Sahelites and the Scarecrows and the Amazons. I gave them territories, control over various vital resources. I built specific characters like I was crafting a D&D party, NPCs the player might encounter both in and out of the ring. I fleshed out a couple of bros from the Basics, the lower-class part of London where surplus Humanity all made do on UBI. Economic and political systems. I wrote thousands of words, forty, fifty single-spaced pages of this stuff.

To give you a taste, this is how one of my backgrounders started off (as usual, click to embiggen):

And here's one of my Tables of Contents:

Now you know what I was doing when I wasn't writing Omniscience.

*

I don't know how much of this survived. It's in the nature of the biz that the story changes to serve the game. I'm told the backstory I developed remains foundational to Underdogs; its bones inform what you experience whether they appear explicitly or not. But much has changed: the city is New Brakka, not New Gehenna. The static fields now serve to jam Big Sys, not to keep out the desert heat. (The desert in this world isn't even all that hot any more, thanks to various geoengineering megaprojects that went sideways.) I think I see hints of my Tribes and territories as Rigg and King prowl the backstreets: references to "caveman territory", or to bits of infrastructure I inserted to give the factions something to fight over. Not having even breached the city gates, I don't know how many of those Easter eggs might be waiting for me, but assuming I can get out of the KillBox I'm definitely gonna be keeping an eye out. (I think maybe the Spire survived in some form. I'm really, really hoping the Pink Widow did too, but I doubt it.)

Doesn't really matter, though. I love the fact that Underdogs has a fairly deep backstory, and I'm honored to have had even a small part in building it; you could write entire novels set in New Brakka. But this game… this game is definitely not plot driven. It is pure first-person adrenaline dust-up, with a relentlessly grim and beautiful grotto-punk aesthetic (yes, yet another kind of -punk; deal with it) that saturates everything from the soundtrack to the voice acting to the interactive cut-scenes. I admit I was skeptical of those cut-scenes at first; having steeped myself in Bioshock and Skyrim and The Last Of Us all these years, were comic-book cut-outs really going to do it for me?

But yes. Yes they totally did. Don't take my word for it: check out the YouTube reviews. Go over to Underdogs' Steam page, where the hundreds of user reviews are "Overwhelmingly Positive". The only real complaint people have about this game is that there isn't more of it.

This game was not made for VR: VR was made for this game.

Most of you remain in pancake mode. VR is too expensive, or you get nauseous, or you've erroneously come to equate "VR" with "Oculus" and you (quite rightly) don't want anything to do with the fucking Zuckerborg. But those of you who do have headsets should definitely get this game. Then you should talk to all those other people, show them that the Valve Index is actually superior to the Oculus in a number of ways (and it supports Linux!), and introduce them to Underdogs. Show them what VR can be, when you're not puking all over the floor because you can't get out of smooth-motion mode and no one told you about Natural Locomotion. Underdogs is the best ambassador for VR since, well, RacketNX. It deserves to be a massive hit.

Now I'm gonna take another run at that boss. I'll let you know when I break out.

01-Mar-24
The New Aesthetic [ 7-Feb-24 10:22am ]

An update on the Samsung moon controversy from a Samsung executive:

There was a very nice video by Marques Brownlee last year on the moon picture. Everyone was like, 'Is it fake? Is it not fake?' There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you're seeing], and it doesn't mean anything. There is no real picture. You can try to define a real picture by saying, 'I took that picture', but if you used AI to optimize the zoom, the autofocus, the scene - is it real? Or is it all filters? There is no real picture, full stop.

(Ed: of course there is no such thing as “a real picture”. What there is, is a relationship: the person who takes the picture, the subject of the picture, the context of that picture-taking. Pictures (like everything else) are relationships, not objects, and it is this which is revealed when you start doing weird things like Samsung/AI does to image making.)

A List Apart: The Full Feed [ 29-Feb-24 2:45pm ]
The Wax and the Wane of the Web [ 29-Feb-24 2:45pm ]

I offer a single bit of advice to friends and family when they become new parents: When you start to think that you've got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it's time for solid food, potty training, and overnight sleeping. When you figure those out, it's time for preschool and rare naps. The cycle goes on and on.

The same applies to those of us working in design and development these days. Having worked on the web for almost three decades at this point, I've seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.

How we got here

I built my first website in the mid-'90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 "web safe" colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.
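For anyone who never had to write it, here's roughly what that looked like: a fragment I've made up in the period style, not markup from any real site.

```html
<!-- A two-column layout, mid-'90s style: a table grid, a spacer
     GIF propping open the gutter, and nested font tags doing the
     styling. Every visual decision lives in the markup itself. -->
<table border="0" cellpadding="0" cellspacing="0" width="600">
  <tr>
    <td bgcolor="#003366" width="150" valign="top">
      <font face="Arial" color="#FFFFFF" size="2">Navigation</font>
    </td>
    <td><img src="spacer.gif" width="10" height="1" alt=""></td>
    <td valign="top">
      <font face="Times New Roman" size="3">
        Welcome to my <font color="#CC0000">homepage</font>!
      </font>
    </td>
  </tr>
</table>
```

Multiply that by every cell on every page and you get a sense of why "crufty" became the operative word.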

The birth of web standards

At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn't happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Greymatter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.

These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.
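To put numbers on how far CSS has since come: the rounded-corner hack alone took four sliced corner images and a pile of positioning, and equal-height columns took a vertically tiled background GIF, where today a couple of declarations suffice. (An illustrative sketch with invented class names, not code from the article.)

```html
<style>
  /* Then: four corner GIFs absolutely positioned over the box.
     Now: one declaration. */
  .card {
    border-radius: 8px;
  }

  /* Then: a tiled background image faking full-length columns.
     Now: real columns that are equal height by default. */
  .layout {
    display: grid;
    grid-template-columns: 200px 1fr;
  }
</style>
```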

The web as software platform

The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.

At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.

This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.

Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote "Of Time and the Web." Or check out the "Web Design History Timeline" at the Web Design Museum. Neal Agarwal also has a fun tour through "Internet Artifacts."

Where we are now

In the last couple of years, it's felt like we've begun to reach another major inflection point. As social-media platforms fracture and wane, there's been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they're still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn't browser support but simply the limits of how quickly designers and developers can learn what's available and how to adopt it.
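Feature queries are a good example of how low that barrier has become: you can adopt a new capability the day you learn about it, with older browsers falling back gracefully. (A generic sketch with invented class names; the article itself contains no code.)

```html
<style>
  /* Baseline that works everywhere */
  .gallery {
    display: flex;
    flex-wrap: wrap;
  }

  /* Enhancement, applied only where the browser understands it */
  @supports (display: grid) {
    .gallery {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
    }
  }
</style>
```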

Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.

If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there's often no alternative, leaving users with blank or broken pages.

Where do we go from here?

Today's hacks help to shape tomorrow's standards. And there's nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we're unwilling to admit that they're hacks or we hesitate to replace them. So what can we do to create the future we want for the web?

Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What's the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it's just a hack that you've grown accustomed to. And sometimes it's holding you back from even better options.

Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn't always true of third-party frameworks. Sites built with even the hackiest of HTML from the '90s still work just fine today. The same can't always be said of sites built with frameworks even after just a couple of years.

Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to "move fast and break things," use the time saved by modern tools to consider more carefully and design with deliberation.

Always be learning. If you're always learning, you're also growing. Sometimes it may be hard to pinpoint what's worth learning and what's just today's hack. You might end up focusing on something that won't matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.

Play, experiment, and be weird! This web that we've built is the ultimate experiment. It's the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we're capable of.

Share and amplify. As you experiment, play, and learn, share what's worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they've taught you.

Go forth and make

As designers and developers for the web (and beyond), we're responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let's imbue our values into the things that we create, and let's make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you've mastered the web, everything will change.

In reading Joe Dolson's recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I'm very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

I'd like you to consider this a "yes… and" piece to complement Joe's post. I'm not trying to refute any of what he's saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I'm not saying that there aren't real risks or pressing issues with AI that need to be addressed—there are, and we've needed to address them, like, yesterday—but I want to take a little time to talk about what's possible in hopes that we'll get there one day.

Alternative text

Joe's piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren't great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they're in (which is a consequence of having separate "foundation" models for text analysis and image analysis). Today's models aren't trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there's potential in this space.

As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That's not right at all… Let me try to offer a starting point—I think that's a win.

Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it'll improve authors' efficiency toward making their pages more accessible.

While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let's suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let's suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:

  • Do more people use smartphones or feature phones?
  • How many more?
  • Is there a group of people that don't fall into either of these buckets?
  • How many is that?

Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding "facts"—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.

Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools' chat-based interfaces and our existing ability to manipulate images in today's AI tools, that seems like a possibility.

Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
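Even without a purpose-built model, one low-tech version of that conversion exists today: pair the chart image with a real table of its values, so the data is there for screen readers (and, someday, an onboard model) to query. A hypothetical sketch, with invented numbers; the article doesn't prescribe any particular markup.

```html
<figure>
  <img src="phone-usage-chart.png"
       alt="Pie chart comparing smartphone usage to feature phone
            usage among US households making under $30,000 a year">
  <figcaption>
    Phone usage, US households under $30,000/year (hypothetical data)
    <table>
      <tr><th scope="col">Device</th><th scope="col">Share</th></tr>
      <tr><td>Smartphone</td><td>71%</td></tr>
      <tr><td>Feature phone</td><td>23%</td></tr>
      <tr><td>Neither</td><td>6%</td></tr>
    </table>
  </figcaption>
</figure>
```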

Matching algorithms

Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it's equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it's Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there's real potential for algorithm development to help people with disabilities.

Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate's strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.

When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That's why diverse teams are so important.

Imagine that a social media company's recommendation engine was tuned to analyze who you're following and if it was tuned to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren't white or aren't male who also talk about AI. If you took its recommendations, perhaps you'd get a more holistic and nuanced understanding of what's happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren't recommending any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.

Other ways that AI can help people with disabilities

If I weren't trying to put this together between other tasks, I'm sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I'm going to make this last section into a bit of a lightning round. In no particular order:

  • Voice preservation. You may have seen the VALL-E paper or Apple's Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It's possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig's disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it's something that we need to approach responsibly, but the tech has truly transformative potential.
  • Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson's and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
  • Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that's prepped for Bionic Reading.

The importance of diverse teams and data

We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.

Want a model that doesn't demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that's authored by people with a range of disabilities, and make sure that that's well represented in the training data.

Want a model that doesn't use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won't be replacing human copy editors anytime soon. 

Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
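To make that concrete with the smallest possible example (mine, not the author's): the difference between accessible and inaccessible training data can be as simple as which element handles a click.

```html
<!-- Inaccessible: a div isn't focusable, isn't announced as a
     button by screen readers, and ignores Enter/Space -->
<div class="btn" onclick="save()">Save</div>

<!-- Accessible: keyboard focus, correct role, and key handling
     all come for free with the native element -->
<button type="button" onclick="save()">Save</button>
```

A copilot trained mostly on the first pattern will keep recommending the first pattern.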


I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.


Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.

06-Feb-24
The New Aesthetic [ 14-Jan-24 4:43pm ]

Amazon Is Selling Products With AI-Generated Names Like “I Cannot Fulfill This Request It Goes Against OpenAI Use Policy”

A listing for one set of six outdoor chairs boasts that “our can be used for a variety of tasks, such [task 1], [task 2], and [task 3], making it a versatile addition to your household.”

“HELP! Guys I'm losing my mind, the lady in the wedding dress shop took this photo of me! I swear on my life this is real, what?! HOW?! Somebody please help!!! I'm freaking out” - TessaCoates on Twitter

[Likely cause of this image is that the iPhone used actually takes several images to compensate for low light conditions, and then stitches them together, producing a moment which never happened. See also: Google Best Take.]

[AP News Story] [ 25-Oct-23 6:16am ]

stml:

The climate crisis is not, in fact, a mystery or a riddle we haven't yet solved due to insufficiently robust data sets. We know what it would take, but it's not a quick fix - it's a paradigm shift. Waiting for machines to spit out a more palatable and/or profitable answer is not a cure for this crisis, it's one more symptom of it.

A woman living in Kenya's Dadaab, which is among the world's largest refugee camps, wanders across the vast, dusty site to a central hut lined with computers. Like many others who have been brutally displaced and then warehoused at the margins of our global system, her days are spent toiling away for a new capitalist vanguard thousands of miles away in Silicon Valley. A day's work might include labelling videos, transcribing audio, or showing algorithms how to identify various photos of cats. 

Amid a drought of real employment, "clickwork" represents one of few formal options for Dadaab's residents, though the work is volatile, arduous, and, when waged, paid by the piece. Cramped and airless workspaces, festooned with a jumble of cables and loose wires, are the antithesis to the near-celestial campuses where the new masters of the universe reside. 

Each task represents a stretching of the gulf between the vast and growing ghettos of disposable life and a capitalist vanguard of intelligent bots and billionaire tycoons. The barbaric and sublime bound in a single click.

The same economy of clicks determines the fates of refugees across the Middle East. Forced to adapt their sleeping patterns to meet the needs of firms on the other side of the planet and in different time zones, the largely Syrian population of Lebanon's Shatila camp forgo their dreams to serve those of distant capitalists. Their nights are spent labeling footage of urban areas — "house," "shop," "car" — labels that, in a grim twist of fate, map the streets where the labelers once lived, perhaps for automated drone systems that will later drop their payloads on those very same streets. The sites on which they labor are so opaque that it is impossible to establish with any certainty the precise purpose or beneficiaries of their work. 

Refugees help power machine learning advances at Microsoft, Facebook, and Amazon

the hauntological society [ 14-Sep-23 8:13pm ]

Paperhouse is a 1988 British dark fantasy film directed by Bernard Rose. It was based on the 1958 novel Marianne Dreams by Catherine Storr. The film stars Ben Cross, Glenne Headly and Gemma Jones. The original novel was the basis of a six-episode British TV series for children in the early 1970s which was titled Escape Into Night.

While suffering from glandular fever, 11-year-old Anna Madden draws a house. When she falls asleep, she has disturbing dreams in which she finds herself inside the house she has drawn. After she draws a face at the window, in her next dream she finds Marc, a boy who suffers with muscular dystrophy, living in the house. She learns from her doctor that Marc is a real person.

Anna sketches her father into the drawing so that he can help carry Marc away, but she inadvertently gives him an angry expression which she then crosses out, and the father (who has been away a lot and has a drinking problem, putting a strain on his marriage) appears in the dream as a furious, blinded ogre. Anna and Marc defeat the monster and shortly afterward Anna recovers, although the doctor reveals that Marc’s condition is deteriorating.

Anna’s father returns home and both parents seem determined to get over their marital difficulties. The family goes on holiday by the sea, where Anna finds an epilogue to her dream.

Charlotte Burke - Anna Madden, Ben Cross - Dad, Glenne Headly - Kate Madden, Elliott Spiers - Marc, Gemma Jones - Dr. Sarah Nichols, Jane Bertish - Miss Vanstone, Samantha Cahill - Sharon, Sarah Newbold - Karen.

Marianne Dreams is a children’s fantasy novel by Catherine Storr. It was illustrated with drawings by Marjorie-Ann Watts and published by Faber and Faber in 1958. The first paperback edition, from Puffin Books in 1964, is catalogued by the Library of Congress as revised.

Marianne is a young girl who is bedridden with a long-term illness. She draws a picture to fill her time and finds that she spends her dreams within the picture she has drawn. As time goes by, she becomes sicker, and starts to spend more and more time trapped within her fantasy world, and her attempts to make things better by adding to and crossing out things in the drawing make things progressively worse. Her only companion in her dreamworld is a boy called Mark, who is also a long-term invalid in the real world.

Yoshihiro Francis Fukuyama (born October 27, 1952) is an American political scientist, political economist, and author. He is a Senior Fellow at the Center on Democracy, Development and the Rule of Law at Stanford. He is best-known for his book The End of History and the Last Man (1992).

By way of an Introduction:

The distant origins of the present volume lie in an article entitled "The End of History?" which I wrote for the journal The National Interest in the summer of 1989. In it, I argued that a remarkable consensus concerning the legitimacy of liberal democracy as a system of government had emerged throughout the world over the past few years, as it conquered rival ideologies like hereditary monarchy, fascism, and most recently communism. More than that, however, I argued that liberal democracy may constitute the "end point of mankind's ideological evolution" and the "final form of human government", and as such constituted the "end of history". That is, while earlier forms of government were characterised by grave defects and irrationalities that led to their eventual collapse, liberal democracy was arguably free from such fundamental internal contradictions. This was not to say that today's stable democracies, like the United States, France, or Switzerland, were without injustice or serious social problems. But these problems were ones of incomplete implementation of the twin principles of liberty and equality on which modern democracy is founded, rather than of flaws in the principles themselves. While some present-day countries might fail to achieve stable liberal democracy, and others might lapse back into other, more primitive forms of rule like theocracy or military dictatorship, the ideal of liberal democracy could not be improved on.

The original article excited an extraordinary amount of commentary and controversy, first in the United States, and then in a series of countries as different as England, France, Italy, the Soviet Union, Brazil, South Africa, Japan, and South Korea. Criticism took every conceivable form, some of it based on simple misunderstanding of my original intent, and others penetrating more perceptively to the core of my argument. Many people were confused in the first instance by my use of the word "history". Understanding history in a conventional sense as the occurrence of events, people pointed to the fall of the Berlin Wall, the Chinese communist crackdown in Tiananmen Square, and the Iraqi invasion of Kuwait as evidence that "history was continuing", and that I was ipso facto proven wrong.

And yet what I suggested had come to an end was not the occurrence of events, even large and grave events, but History: that is, history understood as a single, coherent, evolutionary process, when taking into account the experience of all peoples in all times. This understanding of History was most closely associated with the great German philosopher G. W. F. Hegel. It was made part of our daily intellectual atmosphere by Karl Marx, who borrowed this concept of History from Hegel, and is implicit in our use of words like "primitive" or "advanced," "traditional" or "modern", when referring to different types of human societies. For both of these thinkers, there was a coherent development of human societies from simple tribal ones based on slavery and subsistence agriculture, through various theocracies, monarchies, and feudal aristocracies, up through modern liberal democracy and technologically driven capitalism. This evolutionary process was neither random nor unintelligible, even if it did not proceed in a straight line, and even if it was possible to question whether man was happier or better off as a result of historical "progress".

Both Hegel and Marx believed that the evolution of human societies was not open-ended, but would end when mankind had achieved a form of society that satisfied its deepest and most fundamental longings. Both thinkers thus posited an "end of history": for Hegel this was the liberal state, while for Marx it was a communist society. This did not mean that the natural cycle of birth, life, and death would end, that important events would no longer happen, or that newspapers reporting them would cease to be published. It meant, rather, that there would be no further progress in the development of underlying principles and institutions, because all of the really big questions had been settled.

The present book is not a restatement of my original article, nor is it an effort to continue the discussion with that article's many critics and commentators. Least of all is it an account of the end of the Cold War, or any other pressing topic in contemporary politics. While this book is informed by recent world events, its subject returns to a very old question: Whether, at the end of the twentieth century, it makes sense for us once again to speak of a coherent and directional History of mankind that will eventually lead the greater part of humanity to liberal democracy? The answer I arrive at is yes, for two separate reasons. One has to do with economics, and the other has to do with what is termed the "struggle for recognition".

It is of course not sufficient to appeal to the authority of Hegel, Marx, or any of their contemporary followers to establish the validity of a directional History. In the century and a half since they wrote, their intellectual legacy has been relentlessly assaulted from all directions. The most profound thinkers of the twentieth century have directly attacked the idea that history is a coherent or intelligible process; indeed, they have denied the possibility that any aspect of human life is philosophically intelligible. We in the West have become thoroughly pessimistic with regard to the possibility of overall progress in democratic institutions. This profound pessimism is not accidental, but born of the truly terrible political events of the first half of the twentieth century - two destructive world wars, the rise of totalitarian ideologies, and the turning of science against man in the form of nuclear weapons and environmental damage. The life experiences of the victims of this past century's political violence - from the survivors of Hitlerism and Stalinism to the victims of Pol Pot - would deny that there has been such a thing as historical progress. Indeed, we have become so accustomed by now to expect that the future will contain bad news with respect to the health and security of decent, liberal, democratic political practices that we have problems recognising good news when it comes.

And yet, good news has come. The most remarkable development of the last quarter of the twentieth century has been the revelation of enormous weaknesses at the core of the world's seemingly strong dictatorships, whether they be of the military-authoritarian Right, or the communist-totalitarian Left. From Latin America to Eastern Europe, from the Soviet Union to the Middle East and Asia, strong governments have been failing over the last two decades. And while they have not given way in all cases to stable liberal democracies, liberal democracy remains the only coherent political aspiration that spans different regions and cultures around the globe. In addition, liberal principles in economics - the "free market" - have spread, and have succeeded in producing unprecedented levels of material prosperity, both in industrially developed countries and in countries that had been, at the close of World War II, part of the impoverished Third World. A liberal revolution in economic thinking has sometimes preceded, sometimes followed, the move toward political freedom around the globe.

All of these developments, so much at odds with the terrible history of the first half of the century when totalitarian governments of the Right and Left were on the march, suggest the need to look again at the question of whether there is some deeper connecting thread underlying them, or whether they are merely accidental instances of good luck. By raising once again the question of whether there is such a thing as a Universal History of mankind, I am resuming a discussion that was begun in the early nineteenth century, but more or less abandoned in our time because of the enormity of events that mankind has experienced since then. While drawing on the ideas of philosophers like Kant and Hegel who have addressed this question before, I hope that the arguments presented here will stand on their own.

This volume immodestly presents not one but two separate efforts to outline such a Universal History. After establishing in Part I why we need to raise once again the possibility of Universal History, I propose an initial answer in Part II by attempting to use modern natural science as a regulator or mechanism to explain the directionality and coherence of History. Modern natural science is a useful starting point because it is the only important social activity that by common consensus is both cumulative and directional, even if its ultimate impact on human happiness is ambiguous. The progressive conquest of nature made possible with the development of the scientific method in the sixteenth and seventeenth centuries has proceeded according to certain definite rules laid down not by man, but by nature and nature's laws.

The unfolding of modern natural science has had a uniform effect on all societies that have experienced it, for two reasons. In the first place, technology confers decisive military advantages on those countries that possess it, and given the continuing possibility of war in the international system of states, no state that values its independence can ignore the need for defensive modernisation. Second, modern natural science establishes a uniform horizon of economic production possibilities. Technology makes possible the limitless accumulation of wealth, and thus the satisfaction of an ever-expanding set of human desires. This process guarantees an increasing homogenisation of all human societies, regardless of their historical origins or cultural inheritances. All countries undergoing economic modernisation must increasingly resemble one another: they must unify nationally on the basis of a centralised state, urbanise, replace traditional forms of social organisation like tribe, sect, and family with economically rational ones based on function and efficiency, and provide for the universal education of their citizens. Such societies have become increasingly linked with one another through global markets and the spread of a universal consumer culture. Moreover, the logic of modern natural science would seem to dictate a universal evolution in the direction of capitalism. The experiences of the Soviet Union, China, and other socialist countries indicate that while highly centralised economies are sufficient to reach the level of industrialisation represented by Europe in the 1950s, they are woefully inadequate in creating what have been termed complex "post-industrial" economies in which information and technological innovation play a much larger role.

But while the historical mechanism represented by modern natural science is sufficient to explain a great deal about the character of historical change and the growing uniformity of modern societies, it is not sufficient to account for the phenomenon of democracy. There is no question but that the world's most developed countries are also its most successful democracies. But while modern natural science guides us to the gates of the Promised Land of liberal democracy, it does not deliver us to the Promised Land itself, for there is no economically necessary reason why advanced industrialisation should produce political liberty. Stable democracy has at times emerged in pre-industrial societies, as it did in the United States in 1776. On the other hand, there are many historical and contemporary examples of technologically advanced capitalism coexisting with political authoritarianism from Meiji Japan and Bismarckian Germany to present-day Singapore and Thailand. In many cases, authoritarian states are capable of producing rates of economic growth unachievable in democratic societies.

Our first effort to establish the basis for a directional history is thus only partly successful. What we have called the "logic of modern natural science" is in effect an economic interpretation of historical change, but one which (unlike its Marxist variant) leads to capitalism rather than socialism as its final result. The logic of modern science can explain a great deal about our world: why we residents of developed democracies are office workers rather than peasants eking out a living on the land, why we are members of labor unions or professional organisations rather than tribes or clans, why we obey the authority of a bureaucratic superior rather than a priest, why we are literate and speak a common national language.

But economic interpretations of history are incomplete and unsatisfying, because man is not simply an economic animal. In particular, such interpretations cannot really explain why we are democrats, that is, proponents of the principle of popular sovereignty and the guarantee of basic rights under a rule of law. It is for this reason that the book turns to a second, parallel account of the historical process in Part III, an account that seeks to recover the whole of man and not just his economic side. To do this, we return to Hegel and Hegel's non-materialist account of History, based on the "struggle for recognition".

According to Hegel, human beings like animals have natural needs and desires for objects outside themselves such as food, drink, shelter, and above all the preservation of their own bodies. Man differs fundamentally from the animals, however, because in addition he desires the desire of other men, that is, he wants to be "recognised." In particular, he wants to be recognised as a human being, that is, as a being with a certain worth or dignity. This worth in the first instance is related to his willingness to risk his life in a struggle over pure prestige. For only man is able to overcome his most basic animal instincts - chief among them his instinct for self-preservation - for the sake of higher, abstract principles and goals. According to Hegel, the desire for recognition initially drives two primordial combatants to seek to make the other "recognise" their humanness by staking their lives in a mortal battle. When the natural fear of death leads one combatant to submit, the relationship of master and slave is born. The stakes in this bloody battle at the beginning of history are not food, shelter, or security, but pure prestige. And precisely because the goal of the battle is not determined by biology, Hegel sees in it the first glimmer of human freedom.

The desire for recognition may at first appear to be an unfamiliar concept, but it is as old as the tradition of Western political philosophy, and constitutes a thoroughly familiar part of the human personality. It was first described by Plato in the Republic, when he noted that there were three parts to the soul, a desiring part, a reasoning part, and a part that he called thymos, or "spiritedness." Much of human behaviour can be explained as a combination of the first two parts, desire and reason: desire induces men to seek things outside themselves, while reason or calculation shows them the best way to get them. But in addition, human beings seek recognition of their own worth, or of the people, things, or principles that they invest with worth. The propensity to invest the self with a certain value, and to demand recognition for that value, is what in today's popular language we would call "self-esteem." The propensity to feel self-esteem arises out of the part of the soul called thymos. It is like an innate human sense of justice. People believe that they have a certain worth, and when other people treat them as though they are worth less than that, they experience the emotion of anger. Conversely, when people fail to live up to their own sense of worth, they feel shame, and when they are evaluated correctly in proportion to their worth, they feel pride. The desire for recognition, and the accompanying emotions of anger, shame, and pride, are parts of the human personality critical to political life. According to Hegel, they are what drives the whole historical process.

By Hegel's account, the desire to be recognised as a human being with dignity drove man at the beginning of history into a bloody battle to the death for prestige. The outcome of this battle was a division of human society into a class of masters, who were willing to risk their lives, and a class of slaves, who gave in to their natural fear of death. But the relationship of lordship and bondage, which took a wide variety of forms in all of the unequal, aristocratic societies that have characterised the greater part of human history, failed ultimately to satisfy the desire for recognition of either the masters or the slaves. The slave, of course, was not acknowledged as a human being in any way whatsoever. But the recognition enjoyed by the master was deficient as well, because he was not recognised by other masters, but by slaves whose humanity was as yet incomplete. Dissatisfaction with the flawed recognition available in aristocratic societies constituted a "contradiction" that engendered further stages of history.

Hegel believed that the "contradiction" inherent in the relationship of lordship and bondage was finally overcome as a result of the French and, one would have to add, American revolutions. These democratic revolutions abolished the distinction between master and slave by making the former slaves their own masters and by establishing the principles of popular sovereignty and the rule of law. The inherently unequal recognition of masters and slaves is replaced by universal and reciprocal recognition, where every citizen recognises the dignity and humanity of every other citizen, and where that dignity is recognised in turn by the state through the granting of rights.

This Hegelian understanding of the meaning of contemporary liberal democracy differs in a significant way from the Anglo-Saxon understanding that was the theoretical basis of liberalism in countries like Britain and the United States. In that tradition, the prideful quest for recognition was to be subordinated to enlightened self-interest - desire combined with reason - and particularly the desire for self-preservation of the body. While Hobbes, Locke, and the American Founding Fathers like Jefferson and Madison believed that rights to a large extent existed as a means of preserving a private sphere where men can enrich themselves and satisfy the desiring parts of their souls, Hegel saw rights as ends in themselves, because what truly satisfies human beings is not so much material prosperity as recognition of their status and dignity. With the American and French revolutions, Hegel asserted that history comes to an end because the longing that had driven the historical process - the struggle for recognition - has now been satisfied in a society characterised by universal and reciprocal recognition. No other arrangement of human social institutions is better able to satisfy this longing, and hence no further progressive historical change is possible.

The desire for recognition, then, can provide the missing link between liberal economics and liberal politics that was missing from the economic account of History in Part II. Desire and reason are together sufficient to explain the process of industrialisation, and a large part of economic life more generally. But they cannot explain the striving for liberal democracy, which ultimately arises out of thymos, the part of the soul that demands recognition. The social changes that accompany advanced industrialisation, in particular universal education, appear to liberate a certain demand for recognition that did not exist among poorer and less educated people. As standards of living increase, as populations become more cosmopolitan and better educated, and as society as a whole achieves a greater equality of condition, people begin to demand not simply more wealth but recognition of their status. If people were nothing more than desire and reason, they would be content to live in market-oriented authoritarian states like Franco's Spain, or a South Korea or Brazil under military rule. But they also have a thymotic pride in their own self-worth, and this leads them to demand democratic governments that treat them like adults rather than children, recognising their autonomy as free individuals. Communism is being superseded by liberal democracy in our time because of the realisation that the former provides a gravely defective form of recognition.

An understanding of the importance of the desire for recognition as the motor of history allows us to reinterpret many phenomena that are otherwise seemingly familiar to us, such as culture, religion, work, nationalism, and war. Part IV is an attempt to do precisely this, and to project into the future some of the different ways that the desire for recognition will be manifest. A religious believer, for example, seeks recognition for his particular gods or sacred practices, while a nationalist demands recognition for his particular linguistic, cultural, or ethnic group. Both of these forms of recognition are less rational than the universal recognition of the liberal state, because they are based on arbitrary distinctions between sacred and profane, or between human social groups. For this reason, religion, nationalism, and a people's complex of ethical habits and customs (more broadly "culture") have traditionally been interpreted as obstacles to the establishment of successful democratic political institutions and free-market economies.

But the truth is considerably more complicated, for the success of liberal politics and liberal economics frequently rests on irrational forms of recognition that liberalism was supposed to overcome. For democracy to work, citizens need to develop an irrational pride in their own democratic institutions, and must also develop what Tocqueville called the "art of associating," which rests on prideful attachment to small communities. These communities are frequently based on religion, ethnicity, or other forms of recognition that fall short of the universal recognition on which the liberal state is based. The same is true for liberal economics. Labor has traditionally been understood in the Western liberal economic tradition as an essentially unpleasant activity undertaken for the sake of the satisfaction of human desires and the relief of human pain. But in certain cultures with a strong work ethic, such as that of the Protestant entrepreneurs who created European capitalism, or of the elites who modernised Japan after the Meiji restoration, work was also undertaken for the sake of recognition. To this day, the work ethic in many Asian countries is sustained not so much by material incentives, as by the recognition provided for work by overlapping social groups, from the family to the nation, on which these societies are based. This suggests that liberal economics succeeds not simply on the basis of liberal principles, but requires irrational forms of thymos as well.

The struggle for recognition provides us with insight into the nature of international politics. The desire for recognition that led to the original bloody battle for prestige between two individual combatants leads logically to imperialism and world empire. The relationship of lordship and bondage on a domestic level is naturally replicated on the level of states, where nations as a whole seek recognition and enter into bloody battles for supremacy. Nationalism, a modern yet not-fully-rational form of recognition, has been the vehicle for the struggle for recognition over the past hundred years, and the source of this century's most intense conflicts. This is the world of "power politics," described by such foreign policy "realists" as Henry Kissinger.

But if war is fundamentally driven by the desire for recognition, it stands to reason that the liberal revolution which abolishes the relationship of lordship and bondage by making former slaves their own masters should have a similar effect on the relationship between states. Liberal democracy replaces the irrational desire to be recognised as greater than others with a rational desire to be recognised as equal. A world made up of liberal democracies, then, should have much less incentive for war, since all nations would reciprocally recognise one another's legitimacy. And indeed, there is substantial empirical evidence from the past couple of hundred years that liberal democracies do not behave imperialistically toward one another, even if they are perfectly capable of going to war with states that are not democracies and do not share their fundamental values. Nationalism is currently on the rise in regions like Eastern Europe and the Soviet Union where peoples have long been denied their national identities, and yet within the world's oldest and most secure nationalities, nationalism is undergoing a process of change. The demand for national recognition in Western Europe has been domesticated and made compatible with universal recognition, much like religion three or four centuries before.

The fifth and final part of this book addresses the question of the "end of history," and the creature who emerges at the end, the "last man." In the course of the original debate over the National Interest article, many people assumed that the possibility of the end of history revolved around the question of whether there were viable alternatives to liberal democracy visible in the world today. There was a great deal of controversy over such questions as whether communism was truly dead, whether religion or ultranationalism might make a comeback, and the like. But the deeper and more profound question concerns the goodness of liberal democracy itself, and not only whether it will succeed against its present-day rivals. Assuming that liberal democracy is, for the moment, safe from external enemies, could we assume that successful democratic societies could remain that way indefinitely? Or is liberal democracy prey to serious internal contradictions, contradictions so serious that they will eventually undermine it as a political system? There is no doubt that contemporary democracies face any number of serious problems, from drugs, homelessness and crime to environmental damage and the frivolity of consumerism. But these problems are not obviously insoluble on the basis of liberal principles, nor so serious that they would necessarily lead to the collapse of society as a whole, as communism collapsed in the 1980s.

Writing in the twentieth century, Hegel's great interpreter, Alexandre Kojève, asserted intransigently that history had ended because what he called the "universal and homogeneous state" - what we can understand as liberal democracy - definitively solved the question of recognition by replacing the relationship of lordship and bondage with universal and equal recognition. What man had been seeking throughout the course of history - what had driven the prior "stages of history" - was recognition. In the modern world, he finally found it, and was "completely satisfied." This claim was made seriously by Kojève, and it deserves to be taken seriously by us. For it is possible to understand the problem of politics over the millennia of human history as the effort to solve the problem of recognition. Recognition is the central problem of politics because it is the origin of tyranny, imperialism, and the desire to dominate. But while it has a dark side, it cannot simply be abolished from political life, because it is simultaneously the psychological ground for political virtues like courage, public-spiritedness, and justice. All political communities must make use of the desire for recognition, while at the same time protecting themselves from its destructive effects. If contemporary constitutional government has indeed found a formula whereby all are recognised in a way that nonetheless avoids the emergence of tyranny, then it would indeed have a special claim to stability and longevity among the regimes that have emerged on earth.

But is the recognition available to citizens of contemporary liberal democracies "completely satisfying"? The long-term future of liberal democracy, and the alternatives to it that may one day arise, depend above all on the answer to this question. In Part V we sketch two broad responses, from the Left and the Right, respectively. The Left would say that universal recognition in liberal democracy is necessarily incomplete because capitalism creates economic inequality and requires a division of labor that ipso facto implies unequal recognition. In this respect, a nation's absolute level of prosperity provides no solution, because there will continue to be those who are relatively poor and therefore invisible as human beings to their fellow citizens. Liberal democracy, in other words, continues to recognise equal people unequally.

The second, and in my view more powerful, criticism of universal recognition comes from the Right that was profoundly concerned with the leveling effects of the French Revolution's commitment to human equality. This Right found its most brilliant spokesman in the philosopher Friedrich Nietzsche, whose views were in some respects anticipated by that great observer of democratic societies, Alexis de Tocqueville. Nietzsche believed that modern democracy represented not the self-mastery of former slaves, but the unconditional victory of the slave and a kind of slavish morality. The typical citizen of a liberal democracy was a "last man" who, schooled by the founders of modern liberalism, gave up prideful belief in his or her own superior worth in favour of comfortable self-preservation. Liberal democracy produced "men without chests," composed of desire and reason but lacking thymos, clever at finding new ways to satisfy a host of petty wants through the calculation of long-term self-interest. The last man had no desire to be recognised as greater than others, and without such desire no excellence or achievement was possible. Content with his happiness and unable to feel any sense of shame for being unable to rise above those wants, the last man ceased to be human.

Following Nietzsche's line of thought, we are compelled to ask the following questions: Is not the man who is completely satisfied by nothing more than universal and equal recognition something less than a full human being, indeed, an object of contempt, a "last man" with neither striving nor aspiration? Is there not a side of the human personality that deliberately seeks out struggle, danger, risk, and daring, and will this side not remain unfulfilled by the "peace and prosperity" of contemporary liberal democracy? Does not the satisfaction of certain human beings depend on recognition that is inherently unequal? Indeed, does not the desire for unequal recognition constitute the basis of a livable life, not just for bygone aristocratic societies, but also in modern liberal democracies? Will not their future survival depend, to some extent, on the degree to which their citizens seek to be recognised not just as equal, but as superior to others? And might not the fear of becoming contemptible "last men" lead men to assert themselves in new and unforeseen ways, even to the point of becoming once again bestial "first men" engaged in bloody prestige battles, this time with modern weapons?

This book seeks to address these questions. They arise naturally once we ask whether there is such a thing as progress, and whether we can construct a coherent and directional Universal History of mankind. Totalitarianisms of the Right and Left have kept us too busy to consider the latter question seriously for the better part of this century. But the fading of these totalitarianisms, as the century comes to an end, invites us to raise this old question one more time.

Rudy's Blog [ 28-Jan-24 2:16am ]

This describes an experiment with ChatGPT that I made on Jan 17, 2024. I mention "skinks" a lot, so just below, you'll see a painting with a bunch of skinks; they're the guys in the center. I'm writing about a couple named Oliver and Carol, and about their adventures with two skinks named […]

The post ChatGPT Writing. "Skinks in Rubber Flight." first appeared on Rudy's Blog.

We keep getting closer to simulating a human being by running a process in a computer. On the street, the dream is that we might get software immortality. And I predicted it forty years ago. [Photo credit: Bart Nagel] How does it work? You start with a large data base on what a given person […]

The post Rudy Predicted Software Immortality first appeared on Rudy's Blog.

Notes on Gibson's Sprawl Trilogy [ 10-Dec-23 12:05am ]

As I've mentioned, my wife Sylvia died eleven months ago, and I have to find ways to fill the empty time at home. Somehow I still haven't gotten back to writing stories and novels. I paint in the daytime, and I watch a fair amount of TV in the evening, but I get sick of […]

The post Notes on Gibson's Sprawl Trilogy first appeared on Rudy's Blog.

Rudy. I met John Walker in 1987, shortly after I moved to Silicon Valley, at an outsider get-together called Hackers. John is known as one of the founders of the behemoth company Autodesk. I had a job teaching computer science at San Jose State, although at this point I was totally faking it. Even so, […]

The post "The Roaring Twenties" Rudy & John Walker on LLM, ChatGPT, & AI first appeared on Rudy's Blog.

Journey to the East [ 04-Nov-23 7:27pm ]

Journey to the East! I'm gonna spend four nights with Mike Gambone in Nashville. Mike and I taught at Randolph Macon Woman's College, sharing careers as low-grade malcontent academics. After Mike, I'll visit Greg Gibson in Gloucester, my college roommate at Swarthmore, and a fellow writer and eternal Zen mind-assassin. Then on to Providence […]

The post Journey to the East first appeared on Rudy's Blog.

Talking to Sylvia [ 27-Sep-23 9:37pm ]

As you may know, my wife Sylvia died of cancer on January 6, 2023. I'm still grieving, and I miss her very much. Over the last eight months, I've returned many times to the question of what it might mean to say Sylvia's soul is still with me. In this post, I'll outline some of […]

The post Talking to Sylvia first appeared on Rudy's Blog.

Lauren Weinstein's Blog [ 16-Dec-23 5:52pm ]
About Google and Location Privacy [ 16-Dec-23 5:52pm ]

You may have seen a lot of press over the last few days about Google moving location data by default to be on-device (e.g., on your phone) rather than stored centrally (and encrypted if you do choose to store it centrally), and how this will help prevent abuses of the broad “geofence” warrants that law enforcement uses to obtain data about devices in a particular specified area.

These are all positive moves by Google, but keep in mind that Google has long provided users with control over their location history — how long it’s kept, the ability for users to delete it manually, whether it’s kept at all, etc.

But when is the last time your mobile carrier offered you any control over the detailed data they collect on your devices’ movements? If you’re like most people, the answer seems to be never. And while cellular tracking may not usually be as precise as GPS, these days it can be remarkably accurate.

One wonders why there’s all this talk about Google, when the mobile carriers are collecting so much location data that users seem to have no control over at all, data that, one might assume, is of similar interest to law enforcement for mass geofence warrants.

Think about it.

–Lauren–

As you may know, Google has recently begun a protocol to delete inactive Google accounts, with email notices going out to the account and recovery addresses in advance as a warning.

Leaving aside for the moment the issue that so many people who have lost track of accounts probably have no recovery address specified (or an old one that no longer reaches them), there’s another serious problem.

A few days ago I received a legitimate Google email about an older Google account of mine that I haven’t used in some time. I was able to quickly reauthenticate it and bring it back to active status.

However, this may be the first situation (there may be earlier ones, but I can’t think of any offhand) where Google is actively “out of the blue” soliciting people to log into their accounts (and typically, older accounts that I suspect are more likely not to have 2-factor authentication enabled, for example).

This is creating an ideal template for phishing attacks.

We’ve long strongly urged users not to respond to emailed efforts to get them to provide their login credentials when they have not taken any specific actions that would trigger the need to log in again — and of course this is a very common phishing technique (“You need to verify your account — click here.” “Your password is expiring — click here.” Etc.)

Unfortunately, this is essentially the form of the Google “reactivate your account” email notice. Ordinary busy users who are confused to see one of these suddenly pop into their inbox may either ignore it, thinking it is a phishing attack (and so ultimately lose their account and data), or fall victim to similar-appearing phishes leveraging the fact that Google is now sending these out.

I’ve already seen such a phish, claiming to be Google prompting with a link for a login to a supposedly inactive account. So this scenario is already occurring. The format looked good, and it was forged to appear to be from the same Google address used for the legitimate inactive account notification emails. Even the internal headers had been forged to make the message appear to be from Google. The top-level “Received from” header IP address was wrong, of course, but how many people would notice this, or even look at the headers in the first place?

I can think of some ways to help mitigate these risks, but as this stands right now I am definitely very concerned. 

–Lauren–

Last February, in:

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

I suggested expansion of the existing Robots Exclusion Protocol (e.g. “robots.txt”) as a path toward helping provide websites and creators control over how their contents are used by AI systems.

Shortly thereafter, Google publicly announced their own support for the robots.txt methodology as a useful mechanism in these contexts.

While it’s true that adherence to robots.txt (or the related webpage Meta tags, also part of the Robots Exclusion Protocol) is voluntary, my view is that most large firms do honor its directives, and if a regulatory approach were ultimately deemed genuinely necessary, a more formal mechanism would be a possible option.

This morning Google ran a livestream discussing their progress in this entire area, emphasizing that we’re only at the beginning of a long road, and asking for a wide range of stakeholder inputs.

I believe of particular importance is Google’s desire for these content control systems to be as technologically straightforward as possible (so, building on the existing Robots Exclusion Protocol is clearly desirable rather than creating something entirely new), and for the effort to be industry-wide, not restricted to or controlled by only a few firms.

Also of note is Google’s endorsement of the excellent “AI taxonomy” concept for consideration in these regards. Essentially, the idea is that AI Web crawling exclusions could be specified by the type of use involved, rather than by which entity was doing the crawling. So, a set of directives could be defined that would apply to all AI-related crawlers, irrespective of who was doing the crawling, but permitting (for example) crawlers that are looking for content related to public interest AI research to proceed, but direct that content not be taken or used for commercial Generative AI chatbot systems.
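To make the taxonomy idea concrete, here is a purely hypothetical sketch of what use-based directives might look like in a robots.txt file. The directive names (`Allow-AI-Use`, `Disallow-AI-Use`) and use-type tokens are illustrative inventions for this post, not part of the existing Robots Exclusion Protocol or of any adopted proposal:

```
# Hypothetical robots.txt extension: exclusions by type of AI use,
# applying to all crawlers rather than to specific named user agents.
User-agent: *

# Ordinary search indexing remains governed by the existing directives.
Disallow: /private/

# Illustrative use-based directives (not real syntax):
# permit crawling for public-interest AI research...
Allow-AI-Use: research
# ...but forbid taking content for commercial generative chatbot training.
Disallow-AI-Use: generative-training
```

The appeal of this shape is that a site operator states intent once, per use, instead of playing whack-a-mole with an ever-growing list of crawler names.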

Again, these are of course only the first few steps toward scalable solutions in this area, but this is all incredibly important, and I definitely support Google’s continuing progress in these regards.

–Lauren–

As per requests, this is a transcript of my national network radio report earlier this week regarding Google passkeys and Google account recovery concerns.

 – – –

So there really isn’t enough time tonight to get into any real details on this but I think it’s important that folks at least know what’s going on if this pops up in front of them. Various firms now are moving to eliminate passwords on accounts by using a technology called “passkeys” which bind account authentication to specific devices rather than depending on passwords.

And theoretically passkeys aren’t a bad idea, most of us know the problems with passwords when they’re forgotten or stolen, used for account phishing — all sorts of problems. And I myself have called for moving away from passwords. But as we say so often, the devil is in the details, and I’m not happy with Google’s passkey implementation as it stands right now. Google is aggressively pushing their users currently, asking if they want to move to a passwordless experience. And I’m choosing not to accept that option right now, and while the choice is certainly up to each individual, I myself don’t recommend using it at this stage.

Without getting too technical, one of my concerns is that anyone who can authenticate a device that has Google passkeys enabled on it will have full access to those Google accounts without any additional information — not even an additional authentication step. This means that if someone with a weak PIN on their smartphone loses that device or has it stolen (as happens all the time), and the PIN was eavesdropped or guessed, those passkeys could let a culprit gain full access to the associated Google accounts and lock the rightful owner out of those accounts before they had a chance to take any action to prevent it.

And I’ve been discussing my concerns about this with Google, and their view — to use my words — is that they consider this to be the greatest good for the greatest number of people — for whom it will be a security enhancement. The problem is that Google has a long history of mainly being concerned about the majority, and leaving behind vast numbers of users who may represent a small percentage but still number in the millions or more. And these often are the same people who through no fault of their own get locked out of their Google accounts, lose access to their email on Gmail, photos, other data, and frankly Google’s account recovery systems and lack of useful customer service in these regards have long been a serious problem.

So I really don’t want to see the same often nontechnical folks who may have had problems with Google accounts before, to be potentially subjected to a NEW way to lose access to their accounts. Again it’s absolutely an individual decision, but for now I’m going to skip using Google passkeys and that’s my current personal recommendation.

–Lauren–

Google continues to push ahead with its ill-advised scheme to force passkeys on users who do not understand their risks, and will try to push all users into this flawed system starting imminently.

In my discussions with Google on this matter (I have chatted multiple times with the Googler in charge of this), they have admitted that their implementation, by depending completely on device authentication security which for many users is extremely weak, will put many users at risk of their Google accounts being compromised. However, they feel that overall this will be an improvement for users who have strong authentication on their devices.

And as for ordinary people who already are left behind by Google when something goes wrong? They’ll get the shaft again. Google has ALWAYS operated on this basis — if you don’t fit into their majority silos, they just don’t care. Another way for Google users to get locked out of their accounts and lose all their data, with no useful help from Google.

With Google’s deficient passkey system implementation — they refuse to consider an additional authentication layer for protection — anyone who has authenticated access to your device (that includes the creep that watched you access your phone in that bar before he stole it) will have full and unrestricted access to your Google passkeys and accounts on the same basis. And when you’re locked out, don’t complain to Google, because they’ll just say that you’re not the user that they’re interested in — if they respond to you at all, that is.

“Thank you for choosing Google.”

–Lauren–

In the 2005 film “V for Vendetta” a fictional UK government has turned into a tightly censored, tracked, and controlled hellscape, with technology used to control citizens in every way possible. The UK has now taken a massive step toward making that horror a reality, with the passage of likely the most misguided legislation in the country since the Norman invasion of 1066.

I won’t detail the UK’s Online Safety Bill here — you can find endless references by searching yourself — but its vast, blurry, nebulous, misguided rules for “protecting children from ‘harmful’ content,” a slippery slope bad enough on its own, quickly expand into a Chinese-Internet-style virtual steel collar for every UK resident, chaining them to the government in every aspect of their online lives.

The mandated social media platform ID age verification requirements, which will ultimately require the showing of government IDs for access to sites, alone will create the opportunity for virtually every action of every user of the Internet in the UK to be tracked by the government and its minions in ever expanding ways over time.

Be careful what sites you visit or what you ask or say on them. In China, you can simply vanish under such circumstances. And in the UK? Similar disappearances coming soon, perhaps, as every site you visit, no matter the topic related to business, medical concerns, or other aspects of your family’s private and personal life, will ultimately be linked to you in government databases.

VERY similar *bipartisan* legislative efforts are taking place here in the U.S., though the U.S. court system is creating additional hurdles for their perpetrators, at least for the moment.

While some activists and legislators spend their time ranting about Internet advertising, governments around the world are working to turn the Internet into a pervasive tool for tracking your every online move and thought, permanently linked to your government IDs.

We’ve seen it in Communist China. Now we see it in so-called democracies.

Open your eyes — while you still can. 

–Lauren–

 
News Feeds

Environment
Blog | Carbon Commentary
Carbon Brief
Cassandra's legacy
CleanTechnica
Climate | East Anglia Bylines
Climate and Economy
Climate Change - Medium
Climate Denial Crock of the Week
Collapse 2050
Collapse of Civilization
Collapse of Industrial Civilization
connEVted
DeSmogBlog
Do the Math
Environment + Energy – The Conversation
Environment news, comment and analysis from the Guardian | theguardian.com
George Monbiot | The Guardian
HotWhopper
how to save the world
kevinanderson.info
Latest Items from TreeHugger
Nature Bats Last
Our Finite World
Peak Energy & Resources, Climate Change, and the Preservation of Knowledge
Ration The Future
resilience
The Archdruid Report
The Breakthrough Institute Full Site RSS
THE CLUB OF ROME (www.clubofrome.org)
Watching the World Go Bye

Health
Coronavirus (COVID-19) – UK Health Security Agency
Health & wellbeing | The Guardian
Seeing The Forest for the Trees: Covid Weekly Update

Motorcycles & Bicycles
Bicycle Design
Bike EXIF
Crash.Net British Superbikes Newsfeed
Crash.Net MotoGP Newsfeed
Crash.Net World Superbikes Newsfeed
Cycle EXIF Update
Electric Race News
electricmotorcycles.news
MotoMatters
Planet Japan Blog
Race19
Roadracingworld.com
rohorn
The Bus Stops Here: A Safer Oxford Street for Everyone
WORLDSBK.COM | NEWS

Music
A Strangely Isolated Place
An Idiot's Guide to Dreaming
Blackdown
blissblog
Caught by the River
Drowned In Sound // Feed
Dummy Magazine
Energy Flash
Features and Columns - Pitchfork
GORILLA VS. BEAR
hawgblawg
Headphone Commute
History is made at night
Include Me Out
INVERTED AUDIO
leaving earth
Music For Beings
Musings of a socialist Japanologist
OOUKFunkyOO
PANTHEON
RETROMANIA
ReynoldsRetro
Rouge's Foam
self-titled
Soundspace
THE FANTASTIC HOPE
The Quietus | All Articles
The Wire: News
Uploads by OOUKFunkyOO

News
Engadget RSS Feed
Slashdot
Techdirt.
The Canary
The Intercept
The Next Web
The Register

Weblogs
...and what will be left of them?
32767
A List Apart: The Full Feed
ART WHORE
As Easy As Riding A Bike
Bike Shed Motorcycle Club - Features
Bikini State
BlackPlayer
Boing Boing
booktwo.org
BruceS
Bylines Network Gazette
Charlie's Diary
Chocablog
Cocktails | The Guardian
Cool Tools
Craig Murray
CTC - the national cycling charity
diamond geezer
Doc Searls Weblog
East Anglia Bylines
faces on posters too many choices
Freedom to Tinker
How to Survive the Broligarchy
i b i k e l o n d o n
inessential.com
Innovation Cloud
Interconnected
Island of Terror
IT
Joi Ito's Web
Lauren Weinstein's Blog
Lighthouse
London Cycling Campaign
MAKE
Mondo 2000
mystic bourgeoisie
New Humanist Articles and Posts
No Moods, Ads or Cutesy Fucking Icons (Re-reloaded)
Overweening Generalist
Paleofuture
PUNCH
Putting the life back in science fiction
Radar
RAWIllumination.net
renstravelmusings
Rudy's Blog
Scarfolk Council
Scripting News
Smart Mobs
Spelling Mistakes Cost Lives
Spitalfields Life
Stories by Bruce Sterling on Medium
TechCrunch
Terence Eden's Blog
The Early Days of a Better Nation
the hauntological society
The Long Now Blog
The New Aesthetic
The Public Domain Review
The Spirits
Two-Bit History
up close and personal
wilsonbrothers.co.uk
Wolf in Living Room
xkcd.com