In my very recent post:
“Internet Users’ Safety in a Post-Roe World”
I expressed concerns regarding how Internet and telecommunications firms would protect women’s and others’ data in a post-Roe v. Wade world of anti-abortion states’ health data demands.
Google has now briefly blogged about this, at:
“Protecting people’s privacy on health topics”
The most notable part of the Google post is the announcement of this important change:
“Location History is a Google account setting that is off by default, and for those that turn it on, we provide simple controls like auto-delete so users can easily delete parts, or all, of their data at any time. Some of the places people visit — including medical facilities like counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others — can be particularly personal. Today, we’re announcing that if our systems identify that someone has visited one of these places, we will delete these entries from Location History soon after they visit. This change will take effect in the coming weeks.”
I definitely endorse this change, which aligns with the suggestions in my above-referenced blog post regarding handling of sensitive location data. Thank you, Google, for taking this crucial action. This is an excellent start.
However, not yet publicly addressed by Google are the issues I noted regarding how these sensitive topics in search histories (both as stored by Google itself and/or on browsers) could also be abused by anti-abortion states hell-bent on pursuing women and others as part of those states’ extremist agendas, including in many instances abortion bans without exceptions for rape and incest.
Again, I praise Google for their initial step regarding location data, but there’s much more work still to do!
–Lauren–
Greetings. I write the following with no joy whatsoever.
I have reluctantly come to the conclusion that it may be necessary to legislate that any social media user who wishes to have their posts seen by more than a small handful of users must be authenticated by those (significantly sized) sites using government IDs.
This identification information would be retained by the firms so long as the users are active and for some specified period afterwards. Users would *not* be required to use their real names for posts, but the linkages to their actual IDs would be available to authorities in cases of abuse under appropriate, precisely defined circumstances, subject to court oversight.
This would include situations where a post may be forwarded to larger audiences by others, which will be a technical challenge to implement.
The ability to reach large audiences on today’s Internet should be a privilege, no longer a right.
It is very sad that it has come to this.
–Lauren–
UPDATE (1 July 2022): My Thoughts About Google's New Blog Post Regarding Health-Related Data Privacy
UPDATE (24 June 2022): As expected, the U.S. Supreme Court today overturned Roe v. Wade, bringing the issues discussed below into immediate focus.
TL;DR: By no later than early this July, it is highly probable that a nearly half-century nationwide precedent providing women with abortion-related protections will be partly or completely reversed by the current U.S. Supreme Court (SCOTUS). This sea change, especially impacting women’s rights but with even broader implications now and into the future, would immediately and dramatically affect many policy and operational aspects of numerous important Internet firms. Unless effective planning for this situation takes place imminently, the safety of women, the well-being of Internet users more generally, and crucial services of these firms themselves will in all likelihood be at risk in critical respects.
– – – – – –
Since the recent leak of a SCOTUS draft decision that would effectively eliminate the national protections of Roe v. Wade, and subsequent remarks by some of the associated justices, it is now widely assumed that within a matter of days or weeks a partial or total reversal of Roe will revert the vast majority of abortion-related matters back to the individual states.
Many politicians and states have already indicated their plans to immediately ban most or even all abortions, including in some cases those related to rape and incest, and even those intended to preserve the health of the woman, with only narrow exceptions to save mothers’ lives. Some of these laws may effectively criminalize miscarriages. Some may introduce both civil and criminal penalties related to abortion, possibly bringing homicide or murder charges against involved parties, potentially including the pregnant women.
Various states plan to try extending their bans and civil/criminal penalties to include anyone who “participates” in making abortions possible, even if they are in other states, as when a woman travels to a different state for an abortion (the legality of one state attempting to impact actions in another state in this manner is unclear, but with today’s SCOTUS no possibilities can be safely ignored). Laws in some states attempting to ban obtaining, ordering, or providing various abortion drugs are also already being enacted. Note that SCOTUS has to date allowed the continued operation of the Texas mechanism for suing abortion providers, which has largely blocked abortions in that state.
“Trigger laws” already in place in some states along with the statements of state legislators indicate that near total or total abortion bans will immediately become law in various states if the anticipated SCOTUS decision is announced.
Anti-abortion and affiliated factions are already planning — using the reasoning of the expected SCOTUS decision as a foundation — for follow-up actions pushing for national abortion bans, limits on contraception, banning gay marriage, rolling back LGBTQ+ rights, and related activities. U.S. Senate Republican Leader Mitch McConnell has recently proclaimed that a nationwide abortion ban is possible if the GOP retakes the House, Senate, and presidency.
These events are creating what could become an existential threat to many Internet users and to key aspects of many Internet firms’ policy and operational models.
Given the sweeping and unprecedented scope of the oppressive laws that would be unleashed on pregnant women and anyone else who becomes involved with their healthcare, especially given the civil and even criminal penalties being written into these laws, it seems inevitable that demands for access to data in the possession of many Internet and telecommunications firms relating to user activities will drastically increase.
Search histories (both server and browser) and potentially even stored email data could be sought looking for queries about abortion services, abortion drugs, and numerous other related topics. Location data (both targeting specific users, and data from broader geofence warrants associated with, for example, abortion providers) could be demanded. A range of other resulting data demands are also highly probable. It is also expected that there would be even more calls for government-mandated backdoors into end-to-end encrypted messaging systems.
Women may put their health and lives at risk by not seeking necessary health services, for fear of these abortion laws. Women’s partners, other family members, friends, associates, and healthcare providers may reasonably believe that their livelihoods or freedom may be compromised if they are found to be providing, or aiding in any manner, abortion-related services.
Many users may cease using Internet and various telecommunications services in the ways that they previously would have, out of concerns that their related activities and other data could ultimately fall into the hands of state or other officials, and then be used to track and potentially prosecute them under these abortion-related laws.
This situation is a Trust & Safety emergency of the first order for all of these firms.
While some firms already provide users a range of search/location history control tools, I would assert that most users do not understand them and are frequently unaware of how they are actually configured.
I believe that the best mechanism at this time to help protect women and affiliated others who would be victimized by these state actions is to not save the associated data in the first place, unless a user decides that they desire to have that data saved.
One possibility would be for these firms to proactively offer users the option to not save (or alternatively, very quickly expunge) their search, location, and other user activity data associated with abortion and important related issues — both on company servers, and within browser histories if practicable. Users who wished to have any of these categories of data activity saved as before could choose not to exercise this option.
Unfortunately, a database of users who opt out of having this data saved may itself become an attractive target for data demands by parties who may assume that it mainly represents individuals attempting to hide activities related to abortions. This possibility may argue for making the preferred default behavior not saving this data at all, and offering users the option of saving it if they so choose.
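To make the default-off idea concrete, here is a minimal sketch of such a retention filter, assuming a simple keyword-based notion of “sensitive” (real systems would need far more robust classification); every name and keyword here is hypothetical, not any firm’s actual implementation:

```python
# Conceptual sketch of "don't save by default": history entries matching
# sensitive topic categories are simply never written unless the user has
# explicitly opted in to saving them. Keyword list, function names, and
# matching logic are all hypothetical illustrations.
SENSITIVE_KEYWORDS = {"abortion", "clinic", "shelter", "addiction"}

def should_store(entry: str, user_opted_in: bool) -> bool:
    """Default-off: sensitive entries are stored only with explicit opt-in."""
    is_sensitive = any(k in entry.lower() for k in SENSITIVE_KEYWORDS)
    return user_opted_in or not is_sensitive

history = []
for query in ["weather tomorrow", "abortion clinic near me"]:
    if should_store(query, user_opted_in=False):
        history.append(query)

print(history)  # → ['weather tomorrow']
```

Note the design consequence: because nothing sensitive is ever persisted under the default, there is no separate “opt-out list” to subpoena, which is precisely the argument for preferring default-off over opt-out.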
While these changes could be part of a desirable broader effort to give users more control over which specific aspects of their “personally sensitive” activity data are saved, this would of course be a significantly larger project, and time is of the essence given the imminent SCOTUS ruling.
Obviously I am not here addressing the detailed legal considerations or potential technical implementation challenges of the proposals above, and there may exist other ways to quickly ameliorate the risks that I’ve described, though practical alternatives are not obvious to me at present.
However, I do feel strongly that the status quo regarding user activity data in a post-Roe environment could create a nightmarish situation for many women and other Internet users, and be extraordinarily challenging for firms from Trust & Safety and broader policy and operational aspects.
I strongly recommend that actions be taken immediately to protect Internet users from the storm that will likely arrive very shortly indeed.
–Lauren–
It seems like only a few years ago, the entire world was enamored of Big Tech and the Internet — and pretty much everyone was trying to emulate their most successful players. But now, to watch the news reports and listen to the politicians, the Internet and Big Tech are Our Enemies, responsible for everything from mass shootings to drug addiction, from depression to child abuse, and seemingly most other ills that any particular onlooker finds of concern in our modern world.
The truth is much more complex, and much more difficult to comfortably accept. For the fundamental problems we now face are not the fault of technology in any form, they are fully the responsibility of human beings. That is, as Pogo famously said, “We have met the enemy, and he is us.”
What’s more, most users of social media and other Internet services don’t realize how much they have to lose as a result of the often politically motivated faux “solutions” being proposed (and in some cases already passed into law) that could literally cripple many of the sites that billions of us have come to depend upon in our daily lives.
Hate speech, for example, was not invented by the Internet. While it can certainly be argued that social media increased its distribution, the intractable nature of the problem is clearly demonstrated by calls from the Right to leave most hate speech available as legal speech (at least in the U.S. — other countries have different legal standards regarding speech), while the Left (and many other countries) want hate speech removed even more rapidly. Both sides propose draconian penalties for failures to comply with their completely opposite demands.
In the U.S., some states have already passed laws explicitly prohibiting Big Tech from removing wide ranges of speech, much of which would be considered hateful and/or outright disinformation. These laws are currently unenforced due to court actions, but not on a permanent basis at this time.
The utter chaos that would be triggered by enforcement of such laws and associated attempts to undermine crucial Communications Decency Act Section 230 is obvious. If firms are required by law not to remove speech that they consider to be dangerous misinformation or hate speech, they will almost certainly find themselves cut off from key service providers that they need to stay in operation, who won’t want to keep doing business with them. Perhaps laws would then be passed to try to require that those providers not cut off social media firms in such cases. But what of advertisers who do not wish to be associated with vile content? Laws to force them to continue advertising on particular sites are unlikely in the extreme.
Similar dilemmas apply to most other areas of Big Tech and the Internet that are now the subject of seemingly endless condemnation. There are calls for end-to-end encryption of chat systems and other direct messaging to protect private conversations from outside surveillance and tampering — but there are simultaneously demands that governments be able to see into these conversations to try to detect child abuse or possible mass shooter events before they occur. Another enormous category of conflicting demands will arise as the U.S. Supreme Court drastically scales back fundamental protections for women.
Even if encryption were banned (a ban that we know would never be anywhere near 100% effective), the sheer scale of the Internet in general, and of social media in particular, is such that no currently imaginable combination of human beings and artificial intelligence could usefully scan and differentiate false positives from genuine threats among the nearly inconceivably enormous volumes of data involved. False positives have real costs — they divert scarce resources from genuine threats where those resources are desperately needed.
Big Tech now finds itself firmly between the proverbial rock and the hard place. Governments, politicians, and others are making demands that in many cases aren’t just in 180-degree opposition to one another (“Take down violating posts faster! No, leave them up — taking them down is censorship!”), but that also call for technologically impractical approaches to monitoring social media (both public postings and private messages/chats) at scale. Many of these demands would lead inevitably to requiring virtually all social media posts to be pre-moderated and pre-approved before being permitted to be seen publicly. Every public post. Every private chat. Every live stream throughout the totality of its existence.
Only in such or similar ways could social media firms meet the demands being strewn upon them, even if the inherent conflicts in demands from different groups and political factions could somehow be harmonized, and even leaving aside associated privacy concerns.
But this is actually entirely academic at the kinds of scales at which users currently post to social media. Such pre-moderation is not possible in any kind of effective way without drastically reducing the total volume of user content that is made available.
This would leave Big Tech with only one likely practical path forward. Firms would need to drastically and dramatically reduce the amount of UGC (User Generated Content) that is submitted and publicly posted. All manner of postings — written, video, audio, prerecorded content and live streams, virtually everything that any user might want other users to see, would need to be curtailed. A tiny percentage compared with what is seen today might continue to be publicly surfaced after the required pre-moderation, but this would be a desert ghost town compared to today’s social media landscape.
There are some observers who upon reading this might think to themselves, “So what? To hell with social media! The Internet and the world will be better without it.” But this is fundamentally wrong. The ability of ordinary people to communicate with many others — without having to channel through traditional mass media gatekeepers — has been one of the most essential liberating aspects of the Internet. The appropriate responses to the abusive ways that some persons have chosen to use these capabilities do not include permitting governments to decimate a crucial aspect of the Internet’s empowerment of individuals.
Might governments ultimately expand their monitoring edicts to include email? Will attempts to ban VPNs become mainstream around the planet? There’s no reason to assume that governments demanding mass data surveillance would hesitate in any of these respects.
Of course, if this is what voters really want, it’s what their politicians will likely provide them. Possible alternatives that might help to limit some abuses — one suggestion at least worth discussing is requiring social media firms to confirm the identities of users posting to large groups before such postings are visible — may not be seriously considered. We shall see.
Unfortunately, most users of the Internet and social media are ill-informed about the realities of these situations. Most of what they are seeing on these topics is political rhetoric devoid of crucial technological contexts. They are purposely kept uninformed regarding the ramifications of the false “remedies” that some politicians and haters of Big Tech are spewing forth daily.
We are on the cusp of having major parts of our daily lives seriously disrupted by political demands that would wither away many of the services on the very sites that are so important to us all.
–Lauren–
Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto.
While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who has bizarrely recommended everyone read the latter, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.
Let’s be very clear about this. It can be argued that very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which the purveyors of hate speech and lying propaganda were leveraging their platforms. But they have of late taken major steps to deal with these problems, especially in the wake of breaking news like the NZ shooting, including specific actions regarding takedowns, video suggestions, and other related issues as recommended publicly by various observers, including myself.
Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.
In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be re-uploaded and altered in myriad ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human assets will continually improve these detection systems, absolute perfection is not likely in the cards for the foreseeable future, or more likely ever.
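To illustrate why exact matching fails against even trivially altered re-uploads, and why platforms instead rely on perceptual (similarity-based) fingerprints, here is a minimal sketch using a tiny synthetic frame. The average-hash shown is a real, well-known technique, but all of the data and names here are illustrative assumptions, not any platform’s actual scanning system:

```python
import hashlib

# A tiny synthetic 8x8 grayscale "frame" (values 0-255), standing in for
# a downscaled video frame. Purely illustrative data.
frame = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]

def average_hash(img):
    """64-bit perceptual hash: bit i is set where pixel i exceeds the mean."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def crypto_hash(img):
    """Exact (cryptographic) hash of the raw pixel bytes."""
    return hashlib.sha256(bytes(p for row in img for p in row)).hexdigest()

# Brighten a single pixel slightly -- a trivial "alteration" of the kind
# re-uploaders use to evade exact-match blocking.
altered = [row[:] for row in frame]
altered[0][0] = min(255, altered[0][0] + 3)

print(crypto_hash(frame) == crypto_hash(altered))           # → False
print(hamming(average_hash(frame), average_hash(altered)))  # → 0
```

The exact hash changes completely, so a blocklist of known hashes misses the altered copy entirely, while the perceptual hash barely moves and a similarity threshold still catches it. Determined adversaries then craft larger alterations to push the perceptual distance past any threshold, which is why this remains an arms race rather than a solved problem.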
Meanwhile, other demands being bandied about are equally specious.
Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.
Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are utterly impractical as well. Even with unlimited resources you couldn’t hire enough people to completely preview the enormous number of videos being uploaded every minute. And full previews would indeed be required, since a prohibited clip could be spliced into otherwise permitted footage; even then, there would still be misidentifications.
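A back-of-envelope calculation makes the scale problem concrete. Using the oft-cited public figure of roughly 500 hours of video uploaded to YouTube per minute (assumed here purely for illustration):

```python
# Rough staffing estimate for full human pre-moderation of uploads.
# The upload rate is an oft-cited public figure, assumed for illustration.
upload_hours_per_minute = 500
upload_hours_per_day = upload_hours_per_minute * 60 * 24   # 720,000 hours/day

# Assume each reviewer watches video for a full 8-hour shift, every single
# day, with no breaks, no double-checking, and no appeals handling.
review_hours_per_reviewer_per_day = 8
reviewers_needed = upload_hours_per_day // review_hours_per_reviewer_per_day

print(reviewers_needed)  # → 90000
```

Ninety thousand people watching video nonstop, every day, just to keep pace — before accounting for days off, multiple-reviewer accuracy checks, or continued growth in uploads. And since prohibited clips can be spliced into otherwise innocuous footage, these would have to be full previews, not spot checks.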
Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined evil from “playing nice” long enough for restrictions to be lifted, and then orchestrating their attacks.
Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.
But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem.
Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.
In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).
The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?
The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites.
They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.
You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.
The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.
The battle lines are drawn.
–Lauren–
UPDATE (February 28, 2019): More updates on our actions related to the safety of minors on YouTube
– – –
For vast numbers of persons around the globe, YouTube represents one of the three foundational “must have” aspects of a core Google services triad, with the other two being Google Search and Gmail. There are many other Google services of course, but these three are central to most of our lives, and I’d bet that for many users of these services the loss of YouTube would be felt even more deeply than the loss of either or both of the other two!
The assertion that a video service would mean so much to so many people might seem odd in some respects, but on reflection it’s notable that YouTube very much represents the Internet — and our lives — in a kind of microcosm.
YouTube is search, it’s entertainment, it’s education. YouTube is emotion, nostalgia, and music. YouTube is news, and community, and … well the list is almost literally endless.
And the operations of YouTube encompass a long list of complicated and controversial issues also affecting the rest of the Internet — decisions regarding content, copyright, fair use, monetization and ads, access and appeals, and … yet another very long list.
YouTube’s scope in terms of numbers of videos and amounts of Internet traffic is vast beyond the imagination of any mere mortal beings, with the exception of Googlers like the YouTube SREs themselves who keep the wheels spinning for the entire massive mechanism.
In the process of growing from a single short video about elephants at the zoo (more about that 2005 video in a moment) into a service that I personally can’t imagine living without, YouTube has increasingly intersected with the entire array of human social issues, from the most beatific, wondrous, and sublime — to the most crass, horrific, and evil.
I’ve discussed all of these aspects of YouTube — and my both positive and negative critiques regarding how Google has dealt with them over time — in numerous past posts over the years. I won’t even bother listing them here — they’re easy to find via search.
I will note again, though, that especially of late Google has become very serious about dealing with inappropriate content on YouTube, including taking some steps that I and others have long been calling for, such as removal of dangerous “prank and dare” videos, demonetization and broad de-recommendation of false “conspiracy” videos, and, just announced, demonetization and other utterly appropriate actions against dangerous “anti-vaccine” (aka “anti-vaxx”) videos.
This must be an even more intense time than usual for the YouTube policy folks up in San Bruno at YouTube HQ — because over the last few days yet another massive controversy regarding YouTube has erupted, this time one that has been bubbling under the surface for a long time, and suddenly burst forth dramatically and rather confusingly as well, involving the “hijacking” of innocent YouTube videos’ comments by pedophiles.
YouTube comments are a fascinating example of often stark contrasts in action. Many YouTube viewers just watch the videos and ignore comments completely. Other viewers consider the comments to be at least as important as the videos themselves. Many YouTube uploaders (I’ll refer to them as creators going forward in this post) are effectively oblivious to comments even on their own videos. Given that the default setting for YouTube videos is to permit comments without any moderation, this has become an increasingly problematic issue.
My own policy (started as soon as the functionality to do so became available) has always been to set my own YouTube videos to “moderated” mode — I must approve individual comments before they can appear publicly. But that takes considerable work, even with relatively low viewership videos like mine. Most YouTube creators likely never change the default comments setting, so comments of all sorts can appear and accumulate largely unnoticed by most creators.
In fact, a few minutes ago when I took another look at that first YouTube video (“Me at the zoo”) to make sure that I had the date correct, I noticed that it now has (as I type this) about 1.64 million comments. Every 5 or 10 seconds a new comment pops up on there, virtually all of them either requests for viewers to subscribe to other YouTube channels, or various kinds of more traditional spams and scams.
Obviously, nobody is curating the comments on this historic video. And this is the same kind of situation that has led to the new controversy about pedophiles establishing a virtual “comments network” of innocent videos involving children. It’s safe to assume that the creators of those videos haven’t been paying attention to the evil comments accumulating on those videos, or might not even know how to remove or otherwise control them.
There have already been a bunch of rather wild claims made about this situation. Some have argued that YouTube’s suggestion engine is at fault for suggesting more similar videos that have then in turn had their own comments subverted. I disagree. The suggestion algorithm is merely recommending more innocent videos of the same type. These videos are not themselves at fault, the commenters are the problem. In fact, if YouTube videos didn’t have comments at all, evil persons could simply create comments on other (non-Google) sites that provided links to specific YouTube videos.
It’s easy for some to suggest simply banning or massively restricting the use of comments on YouTube videos as a “quick fix” for this dilemma. But that would drastically curtail the usefulness of many righteous videos.
I’ve seen YouTube entertainment videos with fascinating comment threads from persons who worked on historic movies and television programs or were related to such persons. For “how-to” videos on YouTube — one of the most important and valuable categories of videos as far as I’m concerned — the comment threads often add enormous value to the videos themselves, as viewers interact about the videos and describe their own related ideas and experiences. The same can be said for many other categories of YouTube videos as well — comments can be part and parcel of what makes YouTube wonderful.
To deal with the current, highly publicized crisis involving comment abuse — which has seen some major advertisers pulling their ads from YouTube as a result — Google has been disabling comments on large numbers of videos, and is warning that if comments are turned back on by these video creators and comment abuse occurs again, demonetization and perhaps other actions against those videos may occur.
The result is an enormously complex situation, given that in this context we are talking almost entirely about innocent videos where the creators are themselves the victims of comment abuse, not the perpetrators of abuse.
While I’d anticipate that Google is working on methods to better filter comments algorithmically at scale to try to help avoid these comment abuses going forward, this still likely creates a situation where comment abuse could in many cases be “weaponized” to target innocent individual YouTube creators and videos, in attempts to trigger YouTube enforcement actions against those innocent parties.
This could easily create a terrible dilemma. For safety’s sake, these innocent creators may be forced to disable comments completely, in the process eliminating much of the value of their videos to their viewers. On the other hand, many creators of high-viewership videos simply don’t have the time or other resources to individually moderate every comment before it appears.
A significant restructuring of the YouTube comments ecosystem may be in order, to permit the valuable aspects of comments to continue on legitimate videos, while still reducing the probabilities of comment abuse as much as possible.
Perhaps it might be necessary to consider the permanent changing of the default comments settings away from “allowed” — to either “not allowed” or “moderated” — for new uploads (at least for certain categories of videos), especially for new YouTube creators. But given that so many creators never change the defaults, the ultimate ramifications and possible unintended negative consequences of such a significant policy alteration appear difficult to predict.
Improved tools to aid creators in moderating comments on high viewership videos would also seem to be in focus — perhaps by leveraging third-party services or trusted viewer communities.
There are a variety of other possible approaches as well.
It appears certain that both YouTube itself and YouTube creators have reached a critical crossroads — a junction whose successful navigation will likely require some significant changes going forward, if the greatness of YouTube and its vast positive possibilities for creators are to be maintained or grow.
–Lauren–
A few weeks ago, I noted the very welcome news that Google’s YouTube is cracking down on the presence of dangerous prank and dare videos, rightly categorizing them as potentially harmful content no longer permitted on the platform. Excellent.
Even more recently, YouTube announced a new policy regarding the category of misleading and clearly false “conspiracy theory” videos that would sometimes appear as suggested videos.
Quite a few folks have asked me how I feel about this newer policy, which aims to prevent this category of videos from being suggested by YouTube’s algorithms, unless a viewer is already subscribed to the YouTube channels that uploaded the videos in question.
The policy will take time to implement given the significant number of videos involved and the complexities of classification, but I feel that overall this new policy regarding these videos is an excellent compromise.
If you’re a subscriber to a conspiracy video hosting channel, conspiracy videos from that channel would still be suggested to you.
Otherwise, if you don’t subscribe to such channels, you could still find these kinds of videos if you purposely search for them — they’re not being removed from YouTube.
A balanced approach to a difficult problem. Great work!
–Lauren–
It’s getting increasingly difficult to keep up with Google’s User Trust Failures these days, as they continue to rapidly shed “inconvenient” users faster than a long-haired dog sheds fur. I do plan a “YouTube Live Chat” to discuss these issues and other Google-related topics, tentatively scheduled for Tuesday, February 12 at 10:30 AM PST. The easiest way to get notifications about this would probably be to subscribe to my main YouTube channel at: https://www.youtube.com/vortextech (be sure to click on the “bell” after subscribing if you want real time notifications). I rarely promote the channel but it’s been around for ages. Don’t expect anything fancy.
In the meantime, let’s look at Google’s latest abominable treatment of users, and this time it’s users who have actually been paying them with real money!
As you probably know, I’ve recently been discussing Google’s massive failures involving the shutdown of Google+ (“Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening” – https://lauren.vortex.com/2019/02/04/google-users-panic-over-google-deletion-emails-heres-whats-actually-happening).
Google has been mistreating loyal Google users — among the most loyal that they have, and often decision makers regarding Google commercial products — in the process of shutting down G+ on very short notice.
One might think that Google wouldn’t treat their paying customers as badly — but hey, you’d be wrong.
Remember when Google Fiber was a “thing” — when cities actually competed to be on the Google Fiber deployment list? It’s well known that incumbent ISPs fought against Google on this tooth and nail, but there was always a suspicion that Google wasn’t really in this for the long haul, that it was really more of an experiment and an effort to try to jump-start other firms to deploy fiber-based Internet and TV systems.
Given that the project has been downsizing for some time now, Google’s announcement today that they’re pulling the plug on the Louisville Google Fiber system doesn’t come as a complete surprise.
But what’s so awful about their announcement is the timing, which shows Google’s utter contempt for their Louisville fiber subscribers, on a system that only got going around two years ago.
Just a relatively short time ago, in August 2018, Google was pledging to spend the next two years dealing with the fiber installation mess that was occurring in their Louisville deployment areas (“Google Fiber announces plan to fix exposed fiber lines in the Highlands” – https://www.wdrb.com/news/google-fiber-announces-plan-to-fix-exposed-fiber-lines-in/article_fbc678c3-66ef-5d5b-860c-2156bc2f0f0c.html).
But now that’s all off. Google is giving their Louisville subscribers notice that they have only just over two months before their service ends. Go find ye another ISP in a hurry, oh suckers who trusted us!
Google will provide those two remaining months’ service for free, but that’s hardly much consolation for their subscribers who now have to go through all the hassles of setting up alternate services with incumbent carriers who are laughing their way to the bank.
Imagine if one of those incumbent ISPs — a major telephone or cable company — tried a shutdown stunt like this, with only a couple of months’ notice. They’d be rightly raked over the coals by regulators and politicians.
Google claims that this abrupt shutdown of the Louisville system will have no impact on other cities where Google Fiber is in operation. Perhaps so — for now. But as soon as Google finds those other cities “inconvenient” to serve any longer, Google will most likely trot out the guillotines to subscribers in those cities in a similar manner. C’mon, after treating Louisville this way, why should Fiber subscribers in other cities trust Google when it comes to their own Google-provided services?
Ever more frequently now, this seems to be The New Google’s game plan. Treat users — even paying users — like guinea pigs. If they become inconvenient to care for, give them a couple of months’ notice and then unceremoniously flush them down the toilet. Thank you for choosing Google!
Google is day by day becoming unrecognizable to those of us who have long felt it to be a great company that cared about more than just the bottom line.
Googlers — the rank and file Google employees and ex-employees whom I know — are still great. Unfortunately, as I noted in “Google's Brain Drain Should Alarm Us All” (https://lauren.vortex.com/2019/01/12/googles-brain-drain-should-alarm-us-all), some of their best people are leaving or have recently left, and it becomes ever more apparent that Google’s focus is changing in ways that are bad for consumer users and causing business users to question whether they can depend on Google to be a reliable partner going forward (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google).
In the process of all this, Google is making itself ever more vulnerable to lying Google Haters — and to pandering politicians and governments — who hope to break up the firm and/or suck in an endless money stream of billions in fines from Google to prop up failing 20th century business models.
The fact that Google for the moment is still making money hand over fist may be partially blinding their upper management to the looming brick wall of government actions that could potentially stop Google dead in its tracks — to the detriment of pretty much everyone except the politicos themselves.
I remain a believer that suggested new Google internal roles such as ombudspersons, user advocates, ethics officers, and similar positions — all of which Google continues to fight against creating — could go a long way toward bringing balance back to the Google equation that is currently skewing ever more rapidly toward the dark side.
I continue — perhaps a bit foolishly — to believe that this is still possible. But I am decreasingly optimistic that it shall come to pass.
–Lauren–
Two days ago I posted “Google's Google+ Shutdown Emails Are Causing Mass Confusion” (https://lauren.vortex.com/2019/02/02/googles-google-shutdown-emails-are-causing-mass-confusion) — and the reactions I’m receiving make it very clear that the level of confusion and panic over this situation by vast numbers of Google users is even worse than I originally realized. My inbox is full of emails from worried users asking for help and clarifications that they can’t find or get from Google (surprise!) — and my Google+ (G+) threads on the topic are similarly overloaded with desperate comments. People are telling me that their friends and relatives have called them, asking what this all means.
Beyond the user-trust-abusive manner in which Google has been conducting the entire consumer Google+ shutdown process (even their basic “Takeout” tool to download your own posts is reported to be unreliable for G+ downloads at this point), their notification emails — which I had long urged be sent to provide clarity to users — instead were worded in ways that have massively confused many users, enormous numbers of whom don’t even know what Google+ actually is. These users typically don’t understand the ways in which G+ is linked to other Google services. They understandably fear that their other Google services may be negatively affected by this mess.
Since Google isn’t offering meaningful clarification for panicked users — presumably taking its usual “this too shall pass” approach to user support problems — I’ll clarify this all as succinctly as I can — to the best of my knowledge — right here in this post.
UPDATE (February 5, 2019): Google has just announced that the Web notification panel primarily used to display G+ notifications will be terminated this coming March 7. This cuts another month off the useful life of G+, right when we’ll need notifications the most to coordinate with our followers for continuing contacts after G+. Without the notification panel, this will be vastly more difficult, since the alternative notifications page is very difficult to manage. No apologies. No nuthin’. First it was August. Then April. Now March. Can Google mistreat consumer users any worse? You can count on it!
Here’s an important bottom line: Core Google Services that you depend upon such as Gmail, Drive, Photos, YouTube, etc. will not be fundamentally affected by the G+ shutdown, but in some cases visible effects may occur due to the tight linkages that Google created between G+ and other services.
No, your data on Gmail or Drive won’t be deleted by the Google+ shutdown process. Your uploaded YouTube videos won’t be deleted by this.
However, outside of the total loss of user trust by loyal Google+ users, triggered by the kick in the teeth of the Google+ shutdown (without even the provision of a tool to help with follower migration: “If Google Cared: The Tool That Could Save Google+ Relationships” – https://lauren.vortex.com/2019/02/01/if-google-cared-the-tool-that-could-save-google-relationships), there will be a variety of other Google services that will have various aspects “break” as a result of Google’s actions related to Google+.
To understand why, it’s important to understand that when Google+ was launched in 2011, it was positioned more as an “identity” product than a social media product per se. While it might have potentially competed with Facebook in some respects, creating a platform for “federated” identity across a wide variety of applications and sites was an important goal, and in the early days of Google+, battles ensued over such issues as whether users would continue to be required to use their ostensibly “real” names for G+ (aka, the “nymwars”).
Google acted to integrate this identity product — that is, Google+ — into many Google services and heavily promoted the use of G+ “profiles” and widgets (comments, +1 buttons, “follow” buttons, login functions, etc.) for third-party sites as well.
In some cases, Google required the creation of G+ profiles for key functions on other services, such as for creating comments on YouTube videos (a requirement that was later dropped after user reactions in both the G+ and YouTube communities were overwhelmingly negative).
Now that consumer G+ has become an “inconvenience” to Google, they’re ripping it out by the roots and attempting to completely eliminate any evidence of its existence, by totally removing all G+ posts, comments, and the array of G+ functions that they had intertwined with other services and third-party sites.
This means that anywhere that G+ comments have continued to be present (including Google services like “Blogger”), those comments will vanish. Users whom Google had encouraged at other sites and services to use G+ profile identities (rather than the underlying Google Account identities) will find that those capabilities and profiles disappear. Sites that embedded G+ widgets and functions will have those capabilities crushed, and their page formats in many cases disrupted as a result. Photos that were stored only in G+ and not backed up into the mainstream Google Photos product will reportedly be deleted along with all the G+ posts and comments.
And then on top of all this other Google-created mayhem related to their mishandling of the G+ shutdown, we have those panic-inducing emails going out to enormous numbers of Google users, most of whom don’t understand them. They can’t get Google to explain what the hell is going on, especially in a way that makes sense if you don’t understand what G+ was in the first place, even if somewhere along the line Google finessed you into creating a G+ account that you never actually used.
There’s an old saying — many of you may have first heard it stated by “Scotty” in an old original “Star Trek” episode: “Fool me once, shame on you — fool me twice, shame on me!”
In a nutshell, this explains why so many loyal users of great Google services — services that we depend on every day — are so upset by how Google has handled the fiasco of terminating consumer Google+. This applies whether or not these users were everyday, enthusiastic participants in G+ itself (as I’ve been since the first day of beta availability) — or even if they don’t have a clue of what Google+ is — or was.
Even given the upper management decision to kill off consumer Google+, the actual process of doing so could have been handled so much better — if there had been genuine concern about all of the affected users. Frankly, it’s difficult to imagine realistic scenarios of how Google could have bungled this situation any worse.
And that’s very depressing, to say the least.
–Lauren–
UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening
– – –
As I have long been urging, Google is finally sending out emails to Google+ account holders warning them of the impending user trust failure that is the Google+ shutdown. However — surprise! — the atrocious way that Google has worded the message is triggering mass confusion from users who don’t even consider themselves to have ever been G+ users, and are now concerned that other Google services such as Photos, Gmail, YouTube, etc. may be shutting down and associated data deleted (“Google Finally Speaks About the G+ Shutdown: Pretty Much Tells Users to Go to Hell” – https://lauren.vortex.com/2019/01/30/google-finally-speaks-about-the-g-shutdown-pretty-much-tells-users-to-go-to-hell).
The underlying problem is that many users have G+ accounts but don’t realize it, and apparently Google is sending essentially the same message to everyone who ever had a G+ account, active or not. Because Google has been aggressively urging the creation of G+ accounts (literally until a few days ago!), many users inadvertently or casually created them and then forgot about them, sometimes years ago. Now they’re receiving confusing “shutdown” messages and are understandably going into a panic.
UPDATE (February 3, 2019): I’m now receiving reports of users (especially ones receiving the notification emails who don’t recall having G+ accounts) fearing that “all their Google data is going to be deleted” — and also reports of many users who are assuming that these alarming emails about data deletion are fakes, spam, phishing attempts, etc. I’m also receiving piles of messages containing angry variations on “What the hell was Google thinking when they wrote those emails?”
During the horrific period some years ago when Google was REQUIRING the creation of G+ accounts to comment on YouTube (a disaster that I railed against both outside and inside the company at the time), vast numbers of comments and accounts became tightly intertwined between YouTube and G+, and the ultimate removal of that linkage requirement left enormous numbers of G+ accounts that had really only been created by users for YouTube commenting during that period.
So this new flood of confused and concerned users was completely predictable. If I had written the Google+ shutdown emails, I would have clearly covered these issues to help avoid upsetting Google users unnecessarily. But of course Google didn’t ask me to write the emails, so they followed their usual utilitarian approach toward users that they’re in the process of shedding — yet another user trust failure.
But this particular failure was completely preventable.
Be seeing you.
–Lauren–
UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening
UPDATE (February 2, 2019): Google's Google+ Shutdown Emails Are Causing Mass Confusion
– – –
One of the questions I’m being frequently asked these days is specifically what could Google have done differently about their liquidation of Google+, given that a decision to do so was irrevocable. Much of this I’ve discussed in previous posts, including those linked within: “Google Finally Speaks About the G+ Shutdown: Pretty Much Tells Users to Go to Hell” (https://lauren.vortex.com/2019/01/30/google-finally-speaks-about-the-g-shutdown-pretty-much-tells-users-to-go-to-hell).
The G+ shutdown process is replete with ironies. The official Google account on G+ is telling users to follow Google on Google competitors like Facebook, Twitter, and Instagram. While there are finally some butter bar banners up warning of the shutdown — as I’ve long been calling for — warning emails apparently haven’t yet gone out to most ordinary active G+ users, but some users who had previously deleted their G+ accounts or G+ pages are reportedly receiving emails informing them that Google is no longer honoring their earlier promise to preserve photos uploaded to G+ — download them now or they’ll be crushed like bugs.
UPDATE (February 1, 2019): Emails with the same basic text as was included in the G+ help page announcement from January 30 regarding the shutdown (reference is at the “Go to Hell” link mentioned above), are FINALLY beginning to go out to current G+ account holders (and apparently, to some people who don’t even recall ever using G+).
Google is also recommending that you build blogs or use other social media to keep in touch with your G+ followers and friends after G+ shuts down, but has provided no mechanism to help users do so. And this is a major factor in Google’s user trust failure when it comes to their handling of this entire situation.
G+ makes it intrinsically difficult to reach out to your followers to get contact information for moving forward. You never know which of your regular posts will actually be seen by any given following user, and even trying to do private “+name” messages within G+ often fails because G+ tends to sort similar profile names in inscrutable ways and in limited length lists, often preventing you from ever pulling up the user whom you really want to contact. This gets especially bad when you have a lot of followers, believe me — I’ve battled this many times trying to send a message to an individual follower, often giving up in despair.
I would assert — and I’m not wholly ignorant of how G+ works — that it would be relatively straightforward to offer users a tool that could be used to ask their followers (by follower circles, en masse, etc.) if they wished to stay in contact, and to provide those followers who were interested in doing so the means to pass back to the original user a URL for a profile on a different social media platform, or an email address, or hell, even a phone number. Since participation would be entirely voluntary, there would be no significant data privacy concerns.
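To make concrete how simple such a voluntary contact-exchange flow could be, here’s a minimal sketch. All of the class and function names are my own illustrative assumptions — Google+ never offered any such mechanism or API; this just models the opt-in pass-back of contact info described above.

```python
# Hypothetical sketch of a voluntary follower contact-exchange tool.
# Names here (Follower, MigrationRequest, etc.) are illustrative only;
# no such Google+ API ever existed.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Follower:
    user_id: str
    display_name: str


@dataclass
class ContactReply:
    follower_id: str
    contact_info: str  # a profile URL, an email address, or even a phone number


@dataclass
class MigrationRequest:
    owner_id: str
    message: str
    replies: List[ContactReply] = field(default_factory=list)

    def receive_reply(self, follower: Follower,
                      contact_info: Optional[str]) -> None:
        # Strictly opt-in: a follower who declines passes back nothing,
        # and nothing about them is recorded.
        if contact_info:
            self.replies.append(ContactReply(follower.user_id, contact_info))


def broadcast_request(owner_id: str, message: str) -> MigrationRequest:
    # In a real implementation this would notify the owner's followers
    # (by circle, en masse, etc.); here it only creates the request object.
    return MigrationRequest(owner_id, message)
```

The point of the sketch is that the hard part — reaching one’s own followers — is exactly the plumbing Google already had; everything else is a trivial opt-in reply channel.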
Such a tool could be enormously beneficial to current G+ users, by providing them a simple means to help them stay in touch after G+’s demise in a couple of months. And if Google had announced such a tool, such a clear demonstration of concern about their existing users, rather than trying to wipe them off Google’s servers as quickly as possible and with a minimum of effort, this would have gone far toward proactively avoiding the many user trust concerns that have been triggered and exacerbated by Google’s current game plan for eliminating Google+.
That such a migration assistance tool doesn’t exist — which would have done so much good for so many loyal G+ users, among Google’s most fervent advocates until now — unfortunately speaks volumes about how Google really feels about us.
–Lauren–
UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening
UPDATE (February 2, 2019): Google's Google+ Shutdown Emails Are Causing Mass Confusion
UPDATE (February 1, 2019): If Google Cared: The Tool That Could Save Google+ Relationships
– – –
For weeks now, I’ve been pounding on Google to get more explicit about their impending shutdown of consumer Google+. What they’ve finally written today on a G+ help page (https://support.google.com/plus/answer/9195133) demonstrates clearly how little they care about G+ users who have spent years of their lives building up the service, puts the lie to key claimed excuses for ending consumer G+, and calls into question the degree to which any consumer or business users of Google should trust the firm’s dedication to any specific services going forward.
The originally announced shutdown date was August. Then suddenly it was advanced to April (we now know from their new help page post that the official death date is 2 April 2019, though the process of completely deleting everyone from existence may take some months).
The key reasons for the shutdown originally stated by Google were API “security problems” that were obviously blown out of proportion — Google isn’t even mentioning those in their new announcements. Surprise, surprise:
“Given the challenges in creating and maintaining a successful Google+ that meets our consumer users' expectations, we decided to sunset the consumer version of Google+. We're committed to focusing on our enterprise efforts, and will be launching new features purpose-built for businesses.”
Translation: Hey, you’re not paying us anything, bug off!
And as I had anticipated, Google is doing NOTHING to help G+ users stay in touch with each other after the shutdown. In other words, it’s up to you to figure out some way to do it, boys and girls! Now go play on the freeway! Get lost! We just don’t care about you!
Since there’s nothing in Google’s new announcement that contradicts my analysis of this situation in my earlier related posts, I will herewith simply include for reference some of my recent posts related to this topic, for your possible perusal as you see fit.
I’ll note first my post announcing my own private forum that I’ve been forced to create — to try to provide a safe home for many of my G+ friends who are being unceremoniously crushed by Google’s betrayal of their trust. Given my very limited resources, creating a new forum at this time was not in my plans, but Google’s shabby treatment of G+ users forced my hand. No matter what else happens in my life, I promise never to treat users of my forum with disrespect and contempt as Google has:
A New Invite-Only Forum for Victims of Google's Google+ Purge
https://lauren.vortex.com/2019/01/05/a-new-invite-only-forum-for-victims-of-googles-google-purge
And here are some of my related posts regarding the Google+ shutdown fiasco, its impacts on users, and related topics:
Google's G+ User Trust Betrayal Gets Worse and Worse
https://lauren.vortex.com/2019/01/29/googles-g-user-trust-betrayal-gets-worse-and-worse
An Important Message from "Google" about Google+
https://lauren.vortex.com/2019/01/22/an-important-message-from-google-about-google
Boot to the Head: When You Know that Google Just Doesn't Care Anymore
https://lauren.vortex.com/2019/01/14/boot-to-the-head-when-you-know-that-google-just-doesnt-care-anymore
Why Google Is Terrified of Its Users
https://lauren.vortex.com/2019/01/06/why-google-is-terrified-of-its-users
Why I No Longer Recommend Google for Many Serious Business Applications
https://lauren.vortex.com/2018/12/20/why-i-no-longer-recommend-google-for-many-serious-business-applications
Can We Trust Google?
https://lauren.vortex.com/2018/12/10/can-we-trust-google
The Death of Google
https://lauren.vortex.com/2018/10/08/the-death-of-google
As Google’s continuing decimation of user trust accelerates, you can count on me having more to say about these situations as we move forward. Take care everyone. Stay strong.
Be seeing you.
–Lauren–
When I recently posted a parody “Message from Google” regarding the upcoming shutdown of consumer Google+, I did not anticipate the wellspring of reactions from Google users, including those who were not specifically Google+ users.
An Important Message from "Google" about Google+!
https://lauren.vortex.com/2019/01/22/an-important-message-from-google-about-google
(Google Docs Version: https://lauren.vortex.com/google-plus)
I had anticipated many folks saying that the posting was funny but in key respects depressingly true — which they did — but I did not expect my inbox to be flooded with consumer and business users telling me that they were abandoning Google services or not moving operations to Google, due to Google’s shabby treatment of so many users, and I did not realize that I was going to become the focal point for desperate, loyal G+ users asking me questions that Google has been refusing to answer.
In retrospect I shouldn’t have been surprised. To this day, Google has as far as I know not emailed ordinary G+ users about what’s going on, has no informational banners up about the impending shutdown, and (believe it or not!) is still soliciting for new users to join G+ and spend their time following other users and getting to know a service that Google is about to mercilessly destroy!
It’s remarkable. Unfathomable. Disgraceful.
And the questions. G+ users are sending me their questions:
What happens to all of the external web pages and posts that link to public G+ posts? Google taking down those G+ posts will break vast numbers of non-Google pages around the web.
What happens to sites that have deeply embedded G+ APIs for displaying “Plus” counts, follower boxes, G+ site login integrations, and more? What happens to Google Contacts data integrated from G+?
What is the ultimate fate of the actual G+ posts and related data? Do they all suddenly vanish from public view, from the control of their authors? Will they continue to be used internally by Google for ad system, machine learning, or for other purposes?
The list goes on and on.
Meanwhile, Google is hardly saying anything at all. It’s obvious that they’re treating consumer G+ — and all of its loyal users — as inconvenient pariahs, tossing us all into their dumpster as quickly and unceremoniously as possible.
My inbox is full of users both angry and sad, who loved Google but are now feeling like they’ve been pushed out of a car and directly into the path of steamrollers.
I’ve always tried to help with Google-related problems when I could. But I really don’t know what to say to these jilted users abandoned so callously by Google, because frankly I feel the same way about how Google is mistreating us, and Google has not been forthcoming with explanations, answers, or even believable excuses.
It’s obvious that Google just doesn’t care. And perhaps that’s the saddest part of all.
–Lauren–
UPDATE (March 16, 2019): The ads discussed below as appearing on the Roku YouTube app (even when subscribed to YouTube Premium) have now vanished for me — at least for the moment. I have no word as to whether this is a temporary or more long-term change, whether this was a test that has now terminated, or any other additional information. But I’m definitely glad to see those annoying boxes gone, especially the one that was overlaid on the playing videos themselves.
– – –
I pay for YouTube Premium because — among other things — I don’t want to see ads on videos.
But at least through the popular YouTube Roku app, YouTube is now continuously displaying BUY SEASON ads for some video program clips (complete with purchase price) in a blue box on the app’s video control “watch pages” — and even worse, for a period of around 10 seconds, as a corner ad box overlaid on the running videos themselves. The blue box ad is also present whenever you return to the watch page (e.g., by pausing the video), and the overlay ad appears for the same interval every time you resume the video. The overlay ad in particular is extremely annoying.
These ads are also present as a box on the regular web-based YouTube watch pages for these clips — where they are less obtrusive but still are ads on an ostensibly ad-free service.
YouTube Premium is promoted as a paid, ad-free service. The presence of these ads on Premium accounts (especially when overlaid on top of running videos — whether limited to Roku devices or ultimately deployed through other display devices as well) is not acceptable.
–Lauren–
(Google Doc version: https://lauren.vortex.com/google-plus)
Google – “You can count on us!”
An important announcement about Google+
Dear Google+ users,
We have some bad news for you. We hope you’re sitting down. If you’re driving, please pull over safely before reading the remainder of this message.
We know that many of you have built major parts of your lives around Google+, beginning back in 2011. Over the years since, we have encouraged you to share your experiences and photos, to build Communities and Collections. We know that large numbers of you have spent hours every day on G+, and have built up networks of friends with whom you communicate every day on G+.
And we know that in our rush to maximize G+ participation and engagement, we made some pretty poor decisions, like that period where we integrated YouTube comments and G+ posts, requiring YouTube commenters to create G+ accounts — managing to upset both communities in the process. But you know the motto — move fast and break things!
Now we just want to get out from under Google+. And you’re going to be the collateral damage. Please understand that it’s nothing personal. It’s just business.
So we’re shutting down G+. We’ll be shutting it down this coming August, uh April, uh as soon as we can locate the Google+ SRE in charge. We’ve been trying to page them for months but they’re not answering. We’re pretty sure that there’s a G+ control dashboard in our systems somewhere — when we find it we’ll pull the switch and you’ll all be history.
We could yank your chains and claim that killing G+ is all about poor engagement and API problems and whatnot, but we know you’d see through that, and frankly we just don’t want you around anymore. You’re more trouble than you’re worth to a firm that is pivoting ever more toward serving businesses who actually pay us with actual money. Of course, many businesses now claim that they’ve lost faith in us due to our behavior killing services and mistreating users on the consumer side, but we’ll throw them some usage credits and they’ll come around. You can always buy user trust!
The ad business just isn’t what it used to be. We need new users in new places! Governments are breathing down our necks, ad blockers are reducing ad impressions and conversions, and a bunch of would-be do-gooders are making a fuss about our plans to set up a censored search engine in China. You know how many Chinese are in China? More than you can count on your fingers and toes, believe us!
And speaking of business, we’ll be continuing G+ over on our enterprise/business products, at least until it becomes inconvenient for us to keep doing so. And before you ask, no, you can’t pay for continued access to consumer G+ or bundle it with Google One, and you can’t have a pony or anything like that. Get this through your heads. You’re not our target users or target demographics. We just don’t care about you.
Now, after we’ve said all that, we hope that you won’t get too upset if we ask for your help in killing off G+ with a minimum of public attention from bloggers and the media.
Since we routinely provide the means for you to download your data from Google, you can download your G+ posts before we drive a stake through the heart of the G+ data center clusters. We don’t know what the hell you’re going to do with that data, since you’re going to lose contact with all your followers and friends you’ve built up over the years on G+, but did you really expect us to bother providing a tool to help you stay in contact with them after G+ is tossed into the dumpster? We recommend that you just forget about those people, like we’re forgetting about you. It’s easy with practice.
Oh, here’s another thing. You might expect that with the shutdown of G+ so close, we wouldn’t still be soliciting for new G+ users, and you might think that we’d have “butter bar” banners up warning users of the shutdown and providing continuing updates. You might expect us to email G+ users about what’s going on.
But, c’mon, you know us better than that. Remember, we just don’t care, so there are no banners, no continuing informational updates, and — get this! — we’re still soliciting for new G+ users to sign up, without so much as giving them a clue that they’re signing up for a service that is “dead man walking” already! The poor ignorant slobs! Pretty funny, huh? And the only users we’ve emailed about the G+ shutdown are at sites using our G+ APIs, which we’re going to start dismantling in late January. It’s going to be quite a show, because that’s going to break vast numbers of websites that made the mistake of deeply embedding G+ APIs into their systems. Hey, to quote “Otter” from “Animal House” — “You f*cked up! You trusted us!”
So it’s up to you all to spread the word about what’s going on, because we’ve got better things to do than dealing with G+ losers. You’re so yesterday!
OK, ’nuff said! We’ve already spent more time on this note than we should have, and talking to you guys isn’t advancing any of our careers. Be glad that we’re posting this in a nice dark font that you can actually read — we could have used “Material Design” and then sat here chuckling, knowing that so many of you would be squinting and getting migraine headaches from trying to read this.
But we’re not cruel. We just don’t care about you. There’s a big difference! Please keep that in mind.
Thanks for being the guinea pigs in our social media experiment that was Google+. Now back to your cages!
Best,
Google, Inc.
– – –
Lauren Weinstein / lauren.vortex.com / 22 January 2019 / https://plus.google.com/+LaurenWeinstein / https://twitter.com/laurenweinstein
Google Contacts — which I use heavily — has now moved over to Google’s horrific “let’s kick people with less than perfect vision in the teeth!” user interface (UI) design. I assume it’s rolling out gradually so you may not have it yet.
But even when you do get it, you STILL may not be able to really see it, because like most of Google’s “material design” UI “refreshes” it’s terrible for anyone who has problems with low contrast fonts. Even at 175% magnification, the fonts are painful to read — and for many users are likely to be impossible to view in a practical manner. And as usual, older users will suffer most at the hands of Google’s UI design changes.
There are a few minor improvements in the new Contacts design relating to form field layouts, and your “notes” for an entry no longer need to be in a restricted-sized box. But those positive changes are rendered meaningless when the fonts overall have been made so much more difficult for so many people to read.
If you talk to Google’s internal accessibility folks about this sort of problem (and I’ve done so, numerous times) you’ll be told that the new design is fine for “most users” and meets formal accessibility standards.
Yet the single most common complaint I get about Google is from users who simply can’t comfortably read or use Google interfaces, and Google is pushing material design into more and more of their products. Google Docs (I use this one heavily also), plus Sheets, Slides, and Sites are also apparently doomed to undergo this change, according to Google.
For the moment, you can still switch back to the familiar version of Contacts (there’s a link for this buried at the bottom of the left sidebar), but we know that Google always ultimately removes the ability to use the older versions of their products.
This situation is rapidly becoming worse and worse for the negatively affected users.
Of course, Google could solve this problem by providing higher contrast UI options, but such options are severely discouraged at Google.
After all, you don’t want to make things easy for those users that you don’t really care about at all, right?
For shame Google. For shame.
–Lauren–
UPDATE (February 10, 2019): Another Positive Move by YouTube: No More General “Conspiracy Theory” Suggestions
When I feel that Google is making policy mistakes, I don’t hesitate to call them out as appropriate. I don’t enjoy doing this, but my goal is to help Google be better, not to see a great company becoming less so.
On the other hand, I much enjoy congratulating Google when they make important policy improvements — and yeah, it’s nice when this involves an area where I’ve long been urging such changes.
So I’m very pleased by Google’s newly announced changes to YouTube acceptable content rules, to significantly crack down on dangerous prank and dare/challenge videos on YouTube.
I’ve written about my concerns in this area many times, for example in “YouTube's Dangerous and Sickening Cesspool of ‘Prank’ and ‘Dare’ Videos” (https://lauren.vortex.com/2017/05/04/youtubes-dangerous-and-sickening-cesspool-of-prank-and-dare-videos), approaching two years ago.
I am not unsympathetic to Google’s philosophical and practical preferences for a “very light touch” when it comes to excluding specific types of content from their YouTube platform. In a perfect world, if all video creators behaved responsibly in the first place, we likely wouldn’t be facing these kinds of challenges at all. But of course, the reality is that irresponsible creators of all sorts permeate vast swaths of the Internet ecosystem.
The new YouTube “Policies on harmful or dangerous Content” (https://support.google.com/youtube/answer/2801964) should in theory go a long way toward appropriately addressing the kinds of concerns that I and others have expressed about dangerously inappropriate videos on YouTube.
Whether the new rules will actually have the desired positive effects will of course depend on how rigorously Google enforces these rules, and in particular whether that enforcement is evenhanded — meaning that large YouTube channels generating significant revenue are subject to the same serious enforcement actions as much smaller channels.
Time will tell in this regard. But today, as someone who very much loves YouTube and who considers YouTube to be an irreplaceable aspect of my daily life, I want to thank Google for these positive steps toward making YouTube even better for us all. Kudos to the teams!
–Lauren–
If you’ve ever needed more evidence that Google just doesn’t care about users who have become “inconvenient” to their new business models, one need only look at the saga of their ongoing handling of their announced Google+ shutdown.
I’ve previously discussed what I believe to be the actual motivations for this action, that’s suddenly pulling the rug out from beneath many of their most loyal users (“Can We Trust Google?” – https://lauren.vortex.com/2018/12/10/can-we-trust-google). But let’s leave the genesis of this betrayal of users aside, and just look at how Google is handling the actual process of eliminating G+.
What’s the technical term for this that I’m searching for? Oh yes: disgraceful.
We already know about Google’s incredible user trust failure in announcing dates for this process. First it was August. Then suddenly it was April. The G+ APIs — which vast numbers of websites, including mine, made the mistake of deeply embedding into their sites — will, we’re told, start “intermittently failing” (whatever that actually means) later this month.
It gets much worse though. While Google has tools for users to download their own G+ postings for preservation, they have as far as I know provided nothing to help loyal G+ users maintain their social contacts — the array of other G+ followers and users with whom many of us have built up friendships on G+ over the years.
As far as Google is concerned, when G+ dies, all of your linkages to your G+ friends are gone forever. You can in theory try to reach out to each one and try to get their email addresses, but private messages on G+ have always been hit or miss, and I’ve had to resort to setting up my own invite-only forum for this purpose (“A New Invite-Only Forum for Victims of Google's Google+ Purge” – https://lauren.vortex.com/2019/01/05/a-new-invite-only-forum-for-victims-of-googles-google-purge).
If I’d been running G+ and had been ordered from “on high” to shut it down, I would have insisted on providing tools to help users migrate their social connections on G+ to other platforms, or at least to email! Google just doesn’t seem to care about the relationships that users have built over the years on G+.
You know what else I’d be doing if I ran G+ at this point? I’d be showing respect for my users. I’d be damned well warning everyone about the upcoming shutdown on a continuing basis — not just with an occasional post on G+ itself visible only to users following that official G+ user, and not relying on third-party media stories to inform the user community.
I’d have “butter bar” banners up keeping all G+ users informed. I’d be sending out emails to users updating them on what’s happening (so far as I know, only G+ API users have been contacted by email about the shutdown).
And with only a few months left until Google pulls the plug on G+, I sure as hell wouldn’t still be soliciting for new G+ users!
Yep — believe it or not — Google at this time is STILL soliciting for unsuspecting users to sign up for new G+ accounts, without any apparent warnings that you’re signing up for a service that is already officially the walking dead!
Perhaps this shows most vividly how Google today seems to just not give a damn about users who aren’t in their target demographics of the moment. Or maybe it’s just laziness. We can assume that consumer G+ is being operated by an ever-thinner skeleton crew these days. Sure, encourage users to waste their time setting up profiles and subscribing to communities that will be ghosts in a handful of weeks. What do we care?
The upshot here though isn’t to suggest that Google is required to operate G+ forever, but rather that the way in which they’ve handled the announcements and ongoing process of sunsetting a service much beloved by many Google users has been nothing short of atrocious, and has not shown respect for Google’s users overall.
And that’s nothing short of very dismal, and very sad indeed.
–Lauren–
The casual outside observer can be readily excused for not noticing the multiplying red flags.
At first glance, so much seems golden for Google.
Google is still expanding its physical infrastructure by leaps and bounds. New buildings, new data centers, new offices — just last week we learned that Google will be taking over virtually the entire old Westside Pavilion for offices here in L.A. I used to hang out there many years ago, back when it was a relatively new shopping mall.
The pipeline of graduating students into Google’s HR machine remains packed to overflowing, and as usual there are vastly more applicants than positions available.
But to those of us with deeper connections to the firm and its employees, there are alarm bells sounding loudly.
Google is in the midst of a user trust and ethics crisis, and an increasing number of their best long-term employees are leaving.
Their reasons vary — after all, nobody is expected to stay with one firm forever, and there are career paths to be considered.
However, it is undeniable to anyone who really knows Google that there is an increasing internal glumness, a sense of melancholy and in some cases anger, toward some key decisions that management has been making of late, and regarding the predicted trajectory for Google that logically could result.
As at most firms, there has always been some degree of friction at Google between management and the “rank and file” employees — traditionally staying largely internal to the firm and out of public view.
This has changed recently, with a series of controversial internal issues spilling out dramatically into the external world, in the form of employee protests and other employee actions really never seen before in modern Big Tech workplaces.
Consternation over Google’s links to military projects, a potential censored search project for China, and a massive payout to a high-ranking employee accused of sexual harassment — the world at large has taken note of these issues and more.
Just in the last few days, a major shareholder lawsuit has been filed against Google relating to the sexual harassment case. And coincidentally a couple of days ago, the Arms Control Association named the 4000 Googlers who opposed Google’s contract with the Pentagon’s “Project Maven” as the “Arms Control Person(s) of the Year.”
There have indeed been some positive internal changes at Google resulting from this unprecedented level of employee activism — for example, Google has formalized an important and positive set of AI Principles.
For many Googlers, this has been too little, too late. Particularly among female and LGBTQ employees — but by no means restricted to those groups — the atmosphere at Google is no longer seen as welcoming and ethical. And increasing numbers of Googlers — alarmingly including those who have been at Google for many years, who have been the representatives of Google’s culture at its best, and who have constituted the ethical heart of the company — have left or are about to leave.
And this appears to be only the beginning. I’ve lost count of the Googlers I know who have asked me to keep an ear open for outside positions that fall into their areas of expertise — a bit ironic since I’m always looking for work myself.
These kinds of situations can be devastating to a firm in the long run, in and of themselves.
They also hand Google’s political and other enemies — the haters and more — ammunition that can be used against the firm at a time when Big Tech is increasingly and inappropriately being framed as “enemies of the people” by Luddite forces on both the left and the right — to the ultimate detriment not only of Google itself, but of Google’s users and everyone else as well.
Yet compared to Google’s competition — for example firms like Amazon and Microsoft who happily accept military combat contracts, or Apple with its highly problematic actions to help China block open Internet access by removing VPN and other apps — Google’s ethics have traditionally been a cut above the others.
As Google’s brain and ethics drains continue, as more of their best and most principled employees leave, Google’s moral advantage over those other firms is rapidly deteriorating, and the exodus of such employees is always a “canary in the coal mine” warning that something fundamental has gone awry.
So long as Google management chooses not to directly and effectively address these issues, and not to dedicate significant resources toward reclaiming the ethical, user trust, and employee trust high ground, there is little reason to anticipate a course correction from the increasingly dark path on which Google now appears to be traveling.
–Lauren–
I’ve been highly critical — to say the least — of the European Union’s insane global censorship regime — “The Right To Be Forgotten” (RTBF) — since well before it became actual, enacted law.
But there’s finally some good news about RTBF — in the form of a formal opinion from EU Advocate General Maciej Szpunar, chief adviser at Europe’s highest court.
I’m not sure offhand when I first began writing about the monstrosity that is RTBF, but a small subset of related posts includes:
The “Right to Be Forgotten”: A Threat We Dare Not Forget (2/2012):
https://lauren.vortex.com/archive/000938.html
Why the “Right To Be Forgotten” is the Worst Kind of Censorship (8/2015):
https://lauren.vortex.com/archive/001119.html
RTBF was always bad, but it became a full-fledged dumpster fire when (as many of us had predicted from the beginning) efforts were made to enforce its censorship demands globally. This gave the EU effectively worldwide censorship powers via RTBF’s “hide the library index cards” approach, creating a lowest common denominator “race to the bottom” of expanding mass, government-directed censorship of search results related to usually completely accurate and still published news and other information items.
In a nutshell, Maciej Szpunar’s opinion — which is not binding but is likely to be a strong indicator of how related final decisions will turn out — is that global application of EU RTBF decisions is usually unreasonable. While he doesn’t rule out the possibility of global “enforcement” in “certain situations” (an aspect that will need to be clarified), it’s obvious that he views routine global enforcement of EU RTBF demands to be untenable.
This is of course only a first step toward reining in the RTBF monster, but it’s potentially an enormously important one, and we’ll be watching further developments in this arena with great interest indeed.
–Lauren–
Have you ever seen the “10 Things” philosophy page at Google? It’s uplifting. It’s sweet. And in significant respects, it’s as dead as the dodo:
https://www.google.com/about/philosophy.html
Even if it didn’t say so, you’d know that this page has been around at Google for a long, long time, because it still speaks of “doing one thing really, really well” and calls Gmail and Maps “new” products.
By no means is everything on that page now inoperative, but it’s difficult for some sections not to remind one of the classic film “Citizen Kane” where Charles Foster Kane himself rips his own, now “antique” Declaration of Principles to shreds.
Point number one on that nostalgic Google page is of special note: “Focus on the user and all else will follow.”
I would argue that when those words were first written many years ago, Google’s users — and the entire Internet world — were very different from today. By and large, the percentage of non-techies in Google’s user community was much smaller. You didn’t have so many busy non-technical persons, older people, and others for whom technology was not a 24/7 “lifestyle” but who were still very dependent on Google’s services.
And of course, Google’s range of services was much narrower then, and Google services were not such a massive part of so many people’s lives around the world as those services are today.
Google has traditionally been — and still to a significant extent is — something of a “black box” to most users. Unless you’ve been on the inside, many of its actions seem mysterious and inscrutable. Even being on the inside doesn’t necessarily dispel that sense entirely.
While there have been some improvements in some respects, especially in regard to Google’s paid services, overall Google still seems to have something of an “us vs. them” attitude — keep the users at arm’s length — when it comes to the majority of their users, a tendency to wall users off in significant respects.
Granted, when you have as many users as Google, you can’t provide “white-glove” personalized service to all of them.
But even within the practical range of what could be done to better serve users overall, one senses that Google decreasingly cares about you unless you’re a genuine paying customer, and even then only to the minimal extent required.
Part of this is likely driven by quite realistic fears of potentially draconian actions by pandering politicians in governments around the planet, and the declining value of traditional online advertising models.
But Google’s at best lackadaisical attitude toward so many of its users is still impossible to justify. Just to note two recent examples that I’ve discussed, why would Google not choose to proactively help Chromecast users whose devices might be hijacked, even if the underlying fault wasn’t actually Google’s? And how can Google justify the sudden and total abandonment of loyal Google+ users who have spent many years building close communities, without even bothering to provide any tools to help those users stay in touch with each other after Google pulls the plug?
It’s a matter of priorities. And at Google, only a limited number of particular users tend to be a priority.
It goes further of course. Google’s institutional fear of the “Streisand Effect” — reluctance to even mention a problem to avoid risking drawing any attention to it — rises essentially to the level of neurosis.
Google’s continual refusal to give users a truly representative “place at the deliberation table” through user advocates, or the means to escalate serious dilemmas through ombudspersons or similar roles, is ever more glaring as related issues continue to erupt into public notice, often with significantly negative PR impacts, making Google ever more vulnerable to the whims of opportunistic regulators and politicians.
Some years ago when I was consulting to Google, I was in the office of a significantly high ranking executive at their Mountain View headquarters (one clue to knowing if someone is a significant executive at Google — they have their own office). I was pitching my concepts for roles like ombudspersons, and he was pushing back. Finally, he asked me, “Are you volunteering?”
I thought about it for a few seconds and answered no. A role like that without the actual support of the company would be useless, and it seemed obvious from my meetings that the necessary support for such roles within the company did not exist.
In retrospect, even though I’ve always assumed that his question was really only meant rhetorically, I still wonder if I should have “called his bluff” so to speak and answered in the affirmative. It probably wouldn’t have mattered, but it was an interesting moment.
One way or another, the political “powers that be” today have the long knives out for Google and other Internet-based firms. And I for one don’t want to see Google go the way of DEC and Bell Labs and the long list of other firms that once seemed invincible but now either no longer exist or are mere shadows of their former once-great selves.
Given current trends, I’m unsure if Google — even given the will to do so — can turn this around fast enough to avoid the destructive, toxic, political freight trains headed toward it. Many of my readers frequently suggest to me that even that sentiment is overly optimistic.
We shall see.
–Lauren–
Several weeks ago, in the wake of Google’s shameless and hypocritical abandonment of loyal Google users and communities with the announced rapidly approaching shutdown of consumer Google+ (originally scheduled for August, then — with yet another kick in the teeth to their users — advanced to April based on obviously exaggerated security claims), I created a new private forum to help stay in touch with my own G+ followers.
This was not something that I had anticipated needing to do.
If Google had shown even an ounce of concern for their users’ feelings, and provided the means for the “families” of users created on G+ since its inception to have some way to stay in touch after Google pulls the plug on consumer G+ (to concentrate on expanding their enterprise/business version of G+), I wouldn’t even have had to think about creating a new forum at this stage.
But relying upon Google in these respects — please see: “Can We Trust Google?” (https://lauren.vortex.com/2018/12/10/can-we-trust-google) — is a fool’s errand. Google has made it clear that even their most loyal users can be booted out the door at any time that upper management finds them to be an “inconvenience” in the Google ecosystem, to be swatted like flies. Given Google’s continuing user support and user trust failures in other areas, we all should have seen this coming long ago. In fact, many of us did, but had hoped that we were wrong.
There have been continuing efforts to find some way in conjunction with Google to keep some of these consumer G+ relationships alive — for example, via the enterprise version of G+. To date, these prospects continue to appear bleak. Google seems to have no respect at all for their consumer G+ users, beyond the absolute minimum of providing a way for users to download their own G+ posting archives.
Since Google clearly cares not about destroying the relationships built up on Google+, and since I have many friends on G+ with whom I don’t want to lose touch (many of whom, ironically, are Googlers — great Google employees), I created my own small, new private forum as a way to hopefully avoid total decapitation of these relationships at the hands of Google’s G+ guillotine.
A significant number of my G+ followers have already joined. But I’ve been frequently asked if I would consider opening it up further for other G+ users who feel burned by Google’s upcoming demolition of G+, especially since many G+ users are not finding the currently publicly available alternatives to be appealing, for a range of very good reasons. Facebook is a nonstarter for many, and various of the other public alternatives are already infested with alt-right and other forms of trolls who were justifiably kicked off of the mainstream platforms.
So while I am indeed willing to accept invitation requests more broadly from G+ users and other folks who are feeling increasingly without a welcoming social media home, please carefully consider the following before applying.
It’s my private forum. My rules apply. It operates as a (hopefully) benign dictatorship. I reserve the right to reject any invite applications or submitted postings. Any bad behavior (by my definitions) will result in ejection, typically on a one-strike basis. All submitted posts will be moderated (by myself and/or by trusted users whom I designate) before potentially being accepted and becoming visible on the forum. Private messaging between users is not supported at this time. I make no guarantees regarding how long the forum will operate or how it might evolve, but my intention is for it to be a low-key and comfortable place for friends to post and discuss issues of interest.
If you don’t like that kind of environment, then please don’t even bother applying for an invitation. Go use Facebook. Or go somewhere else. Good luck. You’re going to need it.
If you do want to apply for an invitation, please send an email message explaining briefly who you are and why you want to join, to:
I look forward to hearing from you.
Take care. Be seeing you.
–Lauren–
You may have heard by now that significant numbers of Google’s excellent Chromecast devices — dongles that attach to televisions to display video streams — are being “hijacked” by hackers, forcing attached televisions to display content of the hackers’ choosing. The same exploit permits other tampering with some users’ Chromecasts, including apparently forced reboots, factory resets, and configuration changes. Google Home devices don’t seem to be similarly targeted currently, but they likely are similarly vulnerable.
The underlying technical vulnerability itself has been known for years, and Google has been uninterested in changing it. These devices use several ports for control, and they depend on local network isolation rather than strong authentication for access control.
In theory, if everyone had properly configured Internet routers with bug-free firmware, this authentication and control design would likely be adequate. But of course, not everyone falls into that category.
If those control ports end up accessible to the outside world via unintended port forwarding settings (the UPnP capability in most routers is especially problematic in this regard), the associated devices become vulnerable to remote tampering, and may be discoverable by search engines that specialize in finding and exposing devices in this condition.
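To make the exposure scenario concrete, here is a minimal, hypothetical sketch (not any official Google tooling) of the kind of check a user could run from a machine *outside* their home network: Chromecast devices conventionally listen on TCP ports 8008 and 8009, and if a router's unintended port forwarding makes those ports reachable from the public Internet, the device is exposed. The IP address shown is a placeholder.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A True result against your router's PUBLIC IP for a Chromecast control
    port would suggest the device is reachable from the outside world.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, unreachable, etc.
        return False

# Hypothetical usage, run from outside the home network
# (203.0.113.1 is a documentation-reserved placeholder address):
# for port in (8008, 8009):
#     print(port, is_port_open("203.0.113.1", port))
```

If such a check succeeds from outside, disabling UPnP and removing any unintended port-forwarding rules in the router's configuration would be the obvious remediation.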
Google has their own reasons for not wanting to change the authentication model for these devices, and I’m not going to argue the technical ramifications of their stance right now.
But the manner in which Google has been reacting to this new round of attacks on Chromecast users is all too typical of their continuing user trust failures, others of which I’ve outlined in the recent posts “Can We Trust Google?” (https://lauren.vortex.com/2018/12/10/can-we-trust-google) and “The Death of Google” (https://lauren.vortex.com/2018/10/08/the-death-of-google).
Granted, Chromecast hijacking doesn’t rank at the top of exploits sorted by severity, but Google’s responses to this situation are entirely characteristic of their attitude when faced with such controversies.
To date — as far as I know — Google has simply taken the “pass the buck” approach. In response to media queries about this issue, Google insists that the problem isn’t their fault. They assert that other devices made by other firms can have the same vulnerabilities. They lay the blame on users who have configured their routers incorrectly. And so on.
While we can argue the details of the authentication design that Google is using for these devices, there’s something that I consider to be inarguable: When you blame your users for a problem, you are virtually always on the losing side of the argument.
It’s as if Google just can’t bring itself to admit that anything could be wrong with the Chromecast ecosystem — or other aspects of their vast operating environments.
Forget about who’s to blame for the situation. Instead, how about thinking of ways to assist those users who are being affected or could be affected, without relying on third-party media to provide that kind of help!
Here’s what I’d do if I was making these decisions at Google.
I’d make an official blog post on the appropriate Google blogs alerting Chromecast users to these attacks and explaining how users can check to make sure that their routers are configured to block such exploits. I’d place something similar prominently within the official Chromecast help pages, where many users already affected by the problem would be most likely to initially turn for official “straight from Google” help.
This kind of proactive outreach shouldn’t be a difficult decision for a firm like Google that has so many superlative aspects. But again and again, it seems that Google has some sort of internal compulsion to try to minimize such matters and to avoid reaching out to users in such situations, and seems to frequently only really engage publicly in these kinds of circumstances when problems have escalated to the point where Google feels that its back is against the wall and that it has no other choice.
This isn’t rocket science. Hell, it’s not even computer science. We’re talking about demonstrating genuine respect for your users, even if the total number of users affected is relatively small at Google Scale, even if the problems aren’t extreme, even if the problems arguably aren’t even your fault.
It’s baffling. It’s disturbing. And it undermines overall user trust in Google relating to far more critical issues, to the detriment of both Google itself and Google’s users.
And perhaps most importantly, Google could easily improve this situation, if they chose to do so. No new data centers need be built for this purpose, no new code is required.
What’s needed is merely the recognition by Google that despite their great technical prowess, they have failed to really internalize the fact that all users matter — even the ones with limited technical expertise — and that Google’s attitude toward those users who depend on their services matters at least as much as the quality of those services themselves.
–Lauren–
When small, closed minds tackle big issues, the results are rarely good, and frequently are awful. This tends to be especially true when governments attempt to restrict the development and evolution of technology. Not only do those attempts routinely fail at their stated and ostensible purposes, but they often do massive self-inflicted damage along the way, and end up further empowering our adversaries.
Much as Trump’s expensive fantasy wall (“Mexico will pay for it!”) would have little ultimate impact on genuine immigration problems — other than to further exacerbate them — his Commerce department’s new plans for restricting the export of technologies such as AI, speech recognition, natural language understanding, and computer vision would be yet another unforced error that could decimate the USA’s leading role in these areas.
We’ve been down this kind of road before. Years ago, the USA federal government placed draconian restrictions on the export of encryption technologies, classifying them as a form of munitions. The result was that the rest of the world zoomed ahead in crypto tech. This also triggered famously bizarre situations like t-shirts with encryption source code printed on them being restricted, and the co-inventor of the UNIX operating system — Ken Thompson — battling to take his “Belle” chess-playing computer outside the country, because the U.S. government felt that various of the chips inside fell into this restricted category. (At the time, Ken was reportedly quoted as saying that the only way you could hurt someone with Belle was by dropping it out of a plane — you might kill someone if it hit them!)
As is the case with AI and the other technologies that Commerce is talking about restricting today, encryption R&D information is widely shared among researchers, and likewise, any attempts to stop these new technologies from being widely available, even attempts at restricting access to them by specific countries on our designated blacklist of the moment, will inevitably fail.
Even worse, the reaction of the global community to such ill-advised actions by the U.S. will inevitably tend to put us at a disadvantage yet again, as other countries with more intelligent and insightful leadership race ahead leaving us behind in the dust of politically motivated export control regimes.
To restrict the export of AI and affiliated technologies is shortsighted and dangerous, and will accomplish nothing but damage to our own interests, by restricting our ability to participate fully and openly in these crucial areas. It’s the kind of self-destructive thinking that we’ve come to expect from the anti-science, “build walls” Trump administration, but it must be firmly and completely rejected nonetheless.
–Lauren–
It now seems unlikely that Google will be proceeding anytime soon with their highly controversial “Dragonfly” project to provide Chinese government-controlled censored search services in China. The project has become politically radioactive — odds are that any attempt to move forward would result in overwhelming bipartisan blocking actions by Congress.
But this doesn’t mean that Google can — or that they should — leave China. About 20% of the global population is within Chinese territorial boundaries, well over a billion human beings. Even if it were financially practical to do so (which it isn’t), we cannot ethically abandon them.
Our ethical concerns with China are not with the Chinese people, they’re with the oppressive, dictatorial Chinese government.
In fact, if you ever deal directly with Chinese individuals, you’ll generally find them to be among the greatest folks you’ve ever encountered. Even if your experience is only with the multitude of Chinese-operated stores on eBay, it’s routine to receive superb customer service that puts many U.S.-based firms to shame.
So the dilemma — not just for Google but for all of us in dealing with China — is how to best serve the people of China, without directly supporting China’s totalitarian regime and their escalating and serious mass human rights abuses.
Obviously, it’s impossible to completely compartmentalize these two aspects of the problem, but there are some fairly obvious guidelines that we can apply.
Joint research projects with China — for example, in areas such as machine learning and artificial intelligence — are one category that will generally make sense to pursue, even though we realize that the fruits of such work can be used in negative ways.
But realistically, this is true of most research by humankind throughout history, and joint research projects can at the very least provide valuable insight into important work that might not otherwise be surfaced to domestic researchers.
On the other hand, participation in operational Chinese systems that wage war and/or directly further the oppression of the Chinese people should be absolutely off the table. This is the dangerous category into which Dragonfly would ultimately have fallen, because the Chinese government’s vast censorship apparatus is a foundational and crucial aspect of their maintaining oppressive control over their population.
The fact that the vast majority of common queries under Dragonfly might not have been censored is irrelevant to the concerns at hand. It’s those crucial other Dragonfly queries — censored by order of the Chinese dictators — that would drag this concept deep into an unacceptable ethical minefield.
These are but two examples from a complex array of situations relating to China. Neither Google nor the rest of us can or should disengage from China. But the specific ways in which we choose to work with China are paramount, and it is incumbent on us to assure that such projects always pass reasonable ethical muster.
As usual with so much in life, as the old saying goes (and the Chinese probably said it first) — the devil is in the details.
–Lauren–
If you’re a regular reader of my missives, you know that one of my continuing gripes with Google — going back many years — relates to their continuing failures to devise a system to deal appropriately with user problems in need of support escalation.
I have enormous respect for Google — a great company — but their bullheaded refusal to consider solutions that so many firms have found useful in these regards, such as ombudspersons and user advocates, is a source of continuing deep disappointment.
I’ve written about these issues so very many times over the years that I’m not going to repeat myself here, beyond saying that the usual excuse one hears — that people using free services should expect to get the level of service that they’re paying for — is not an acceptable one for services that have become so integral to so many people’s lives.
But it goes way beyond this. Escalation failures are common even with users of Google’s paid business services, and for major YouTube creators in monetary relationships with Google.
In fact, YouTube-related problems are near the top of the list of why users come to me asking for help with Google issues. Sometimes I can help them, sometimes I can’t. Either way, this isn’t something I should need to be doing from the outside of Google! Google needs to have dedicated employee roles for these escalation tasks.
I won’t here plow again over the ground that I’ve covered in the past regarding YouTube problems with Content ID and false ownership claims, and the desperation of honest YouTube creators who get crunched between the gears of YouTube’s claim/counterclaim machinery.
Rather, I’ll point to a particularly vivid very recent story of a YouTube creator who had his video (monetized with over 47 million views), ripped out from under him by someone with no actual ownership rights, and the Kafkaesque failures of Google to deal with the situation appropriately.
This case is all the more painful since this creator had enough subscribers that he had a YouTube “liaison” (something most YouTube creators don’t have, of course), but YouTube’s procedures failed so badly that even this didn’t help him. I recommend that you watch his video explaining the situation (posted just five days ago, it already has over two million views):
“How my video with 47 million views was stolen on YouTube” – https://www.youtube.com/watch?v=z4AeoAWGJBw
And keep in mind, as he points out himself, this is far from an isolated kind of case.
Google knows what’s necessary to fix these kinds of situations. You start by hiring an ombudsperson, user advocate, or create some similar dedicated roles with genuine responsibility within the firm.
Google continues to fight these concepts, and the longer they do so, the more they risk seeing trust in Google further diminished and eventually decimated.
–Lauren–
Recently in “Can We Trust Google?” (https://lauren.vortex.com/2018/12/10/can-we-trust-google), I explored the question of whether Google should be considered to be a reliable partner to consumers or businesses, given the manner in which Google all too frequently makes significant changes to their products without documenting associated user interface and other related issues appropriately.
Even worse, Google has a long history of leaving users out in the cold when Google abruptly decides to kill products, often with inadequate or questionable claimed justifications.
Google has taken such actions again and again, most recently with the consumer version of Google+ — whose users represent among Google’s most loyal fans. Today, Google announced that G+ APIs will start to break in January — causing vast numbers of active sites and archives which depend on them for various display elements (including some of my own sites) to turn into graphical garbage without significant and time-consuming modifications.
Meanwhile, Google is speeding ahead with their total shutdown of consumer G+, on their new accelerated schedule that suddenly took months off of their originally announced rapid shutdown timetable.
If this all isn’t enough of a kick in the teeth to Google fans, Google continues extolling the virtues of the new G+ features that they plan for enterprises — for businesses — which apparently will be continuing and expanding even as the consumer side is liquidated.
But I wonder how long enterprise G+ will actually last? So many business people have contacted me noting that they are no longer willing to entrust long-term or mission-critical applications to Google, because they just don’t trust that Google can be depended upon to maintain products into the foreseeable future. These entrepreneurs fear that they’re going to end up being ground up in the garbage disposal just like Google’s consumer users so often are, when Google products are pulled out from under them.
This goes far beyond Google+. These issues permeate the way Google treats both consumer and business users — very much as if they were disposable commodities, where only the largest demographic groups mattered at all.
I am a tremendous fan of Google and Googlers. But I’m forced to agree that at present it’s difficult to recommend Google as a stable resource for businesses that need to plan further than relatively short periods into the future.
For business planning purposes, all of that great Google technology is effectively worthless if you can’t depend on it being stable and still being available even a few short years from now.
For all the many faults of firms like Microsoft and Amazon — and I’m no friend of either — both of them seem to have learned that businesses need stability above all — a lesson that Google still doesn’t seem to have really internalized.
Both Amazon and Microsoft seem to understand that the ways in which you treat the users of your consumer products will reflect mightily on businesses’ decisions about adopting your enterprise products and services. For all of their vast technological expertise, Google seems utterly clueless regarding this important fact.
When I mentioned recently that I still believed it possible for Google to turn this situation around, I received a bunch of responses from readers suggesting that I was wrong, that Google will never make the kinds of changes that would truly be necessary.
I will continue to try to help folks with Google-related issues to the maximal extent that I can. But I sure hope that my optimistic view regarding Google’s ability to change isn’t proven to be painfully incorrect in the end.
–Lauren–
During a radio interview a few minutes ago, I was asked for my opinion regarding Google CEO Sundar Pichai’s hearing at Congress today.
There’s a lot that can be said about this hearing. Sundar confirmed that Google does not plan to go ahead with a Chinese government censored search engine — right now.
Most of the hearing involved the ridiculous, false continuing charges that Google’s search results are politically biased — they’re not.
But relating to that second topic, I heard one of the scariest demands ever uttered by a member of the U.S. Congress.
Rep. Steve King (R-Iowa) wants Google to hand over to Congress the identities of the Googlers whose work relates to search algorithms. King made it clear that he wants to examine these private individuals’ personal social media postings, his direct implication being that showing a political orientation in your personal postings would mean that you’d be incapable of doing your work on search in an unbiased manner.
This is worse than wrong, worse than stupid, worse than lunacy — it’s outright dangerous McCarthyism of the first order.
Everything else that occurred in that hearing pales into insignificance compared with King’s statement.
King continued by threatening Google with various punitive actions if Google refuses to agree to his demand regarding Google employees, and also to turn over the details of how the Google search algorithms are designed — which of course Congress would leak — setting the stage for search to be gamed and ruined by every tech-savvy wacko and crook.
Steve King has a long history of crazy, racist remarks, so it’s no surprise that he also rants into straitjacket territory when it comes to Google as well.
But his remarks today regarding Google were absolutely chilling, and they need to be widely and vigorously condemned in no uncertain terms.
–Lauren–
Can We Trust Google?
https://lauren.vortex.com/2018/12/10/can-we-trust-google
The DATA Says: Google’s “Dragonfly” Chinese Search Is Doomed
https://lauren.vortex.com/2018/11/28/the-data-says-googles-dragonfly-chinese-search-is-doomed
Save Google — but Let Facebook Die
https://lauren.vortex.com/2018/11/22/save-google-but-let-facebook-die
After the Walkout, Google’s Moment of Truth
https://lauren.vortex.com/2018/11/03/after-the-walkout-googles-moment-of-truth
Beware of “Self-Selected” Surveys of Google Employees
https://lauren.vortex.com/2018/10/30/beware-of-self-selected-surveys-of-google-employees
Why Internet Tech Employees Are Rebelling Against Military Contracts
https://lauren.vortex.com/2018/10/15/why-internet-tech-employees-are-rebelling-against-military-contracts
The Death of Google
https://lauren.vortex.com/2018/10/08/the-death-of-google
–Lauren–
I consider Google to be a great company. I have many friends who are Googlers. I am dependent on many Google services and products.
But if you’ve gotten the sense that Google has been flailing around in a seemingly uncoordinated fashion lately, like a chainsaw run wild, you’re not the only one. And I’m not talking right now about their nightmare “Dragonfly” Chinese censorship project or the righteous rising tide of their own employees’ protests.
Let’s talk about the users. Let’s talk about you and me.
Some of Google’s management decisions are chopping Google’s most loyal users to figurative bloody bits.
Google has fantastic engineering teams, world-class privacy and security teams, brilliant lawyers, and so many other wonderful human and technical resources — yet Google’s upper management apparently still hasn’t really grown up.
To put it bluntly, Google management in key respects treats ordinary users like disposable bathroom paper products, to be used and quickly disposed of without significant consideration of the ultimate impacts.
There’s a site out on the Web that calls itself the Google Graveyard — they list all the Google services that have appeared and then unceremoniously vanished over the years, leaving seas of disappointed and upset users in their wake.
Today Google apparently announced that they’re pushing up the death date for consumer Google+ to April. Just recently they said it was going to be next August, so loyal G+ users — and don’t believe the propaganda, there are vast numbers of them — were planning on the basis of that original date. Google is citing a new minor G+ security bug, apparently using it as an excuse. But we know that’s bogus, because Google itself notes that this minor bug existed for less than a week and there was no evidence of it being exploited.
Google just wants to dump its social media users who aren’t on YouTube. No matter the many years that those users on G+ have spent building up vibrant communities on the platform. We know Google isn’t killing the essential G+ technical infrastructure, since they plan to continue it for their enterprise (paying) customers.
Who knows, maybe Google will next announce that consumer G+ will shut down 48 hours from now.
Let’s face it, you simply cannot depend on Google honorably even sticking to their own service shutdown dates and not pulling the plug earlier — users be damned! Who really cares about the impacts on those users, right?
You want another recent example? Glad you asked! Google over the last handful of days suddenly, and with no notification at all, started removing a feature from Google Voice, causing the way incoming calls are treated by the system to suddenly change for users employing that option in call screening. Because Google didn’t bother to notify any Google Voice users about this in advance, users only found out when their callers started expressing confusion about what was going on. I’m in useful discussions with the Google Voice team about this situation, and Google asserts that most users didn’t choose a mix of options that were affected by this.
But that’s not the point! For those users who did use that option set, this was a big deal, a major disruptive change that they were not told about (and in fact, still have not officially been informed about as far as I know), leaving them no opportunity to take reasonable proactive actions and limit the negative impacts.
The list of similarly affected Google products and services goes on and on. Google adds and removes features and changes user interfaces without warning, explanation, or frequently even any documentation. They kill off services — used by millions — on short notice, and even when they give a longer notice they may then suddenly chop months from that interval, as they have with G+.
Some might argue that users who don’t pay for Google services shouldn’t expect much more than nuthin’. But that’s garbage.
Vast numbers of persons depend on Google for many aspects of their lives. In many cases, they would happily pay reasonable fees for better support and some guarantees that Google won’t suddenly kill their favorite services! Innumerable people have told me how they’d happily pay to use consumer G+ or Google Voice under those conditions, and the same goes for many other Google services as well.
And yet, except for the limited offerings in “Google One” and media offerings like YouTube and Music premium services, essentially the only other way to pay for standard Google services is through Google’s “G Suite” enterprise model, which is domain-centric and far more appropriate for corporate users than for individuals.
Google knows that as time goes on their traditional advertising revenue model will become decreasingly effective. This is obviously one reason why they’ve been pivoting toward paid service models aimed at businesses and other organizations. That doesn’t just include G Suite, but great products like their AI offerings, Google Cloud, and more.
But no matter how technically advanced those products, there’s a fundamental question that any potential paying user of them must ask themselves. Can I depend on these services still being available a year from now? Or in five years? How do I know that Google won’t treat business users the same ways as they’ve treated their consumer users?
In fact, sadly, I hear this all the time now. Users tell me that they had been planning to move their business services to Google, but after what they’ve seen happening on the consumer side they just don’t trust Google to be a reliable partner going forward.
And I can’t blame folks for feeling this way. As the old saying goes, “Fool me once shame on you, fool me twice shame on me.”
The increasingly shabby way that Google treats consumer users in the respects that I’ve been discussing here has real world impacts on how potential business users view Google. The fact that Google has been continuing to pull the rug out from under their most loyal consumer users has not been lost on business observers, who know that even though Google’s services are usually technically superior, that fact alone is not enough to trust Google with your business operations.
Google works quite hard, it seems, to avoid thinking much about these negative impacts. That’s part of the reason, I believe, why Google fights so hard against filling commonly accepted roles that so many firms have found to be so incredibly useful, such as ombudspersons, ethics officers, and user advocates.
In some ways, Google management still behaves as if Google was still a bunch of PCs stacked up in a garage. They still have not really taken responsibility for their important place in the world.
Personally, I still believe that Google can turn around this situation for the better. However, I am forced to admit that to date, I do not see significant signs of their being willing to take the significant steps and to make the serious changes necessary for this to occur.
–Lauren–
Google’s highly controversial “Dragonfly” project, exploring the possibility of providing Chinese-government censored and controlled search to China, is back in the news — with continuing protests by concerned Google employees, including public letters and other actions.
I have previously explained my opposition to this project and my solidarity with these Googlers, in posts such as: “Google Admits It Has Chinese Censorship Search Plans - What This Means” (https://lauren.vortex.com/2018/08/17/google-admits-it-has-chinese-censorship-search-plans-what-this-means) and other related essays.
There are a multitude of reasons to be skeptical about this project, ranging from philosophical to emotional to economic. Basic issues relating to freedom of speech and individual rights come into play when dealing with an absolute dictatorship that sends people to “reeducation” camps where they are tortured merely for having the “wrong” religions, or where making an “inappropriate” comment on the tightly-controlled Chinese Internet can result in authorities dragging you away to secret prisons.
There is also ample evidence to suggest that if Google proceeds to provide such search services in China, they will be mercilessly attacked by politicians from both sides of the aisle, many of whom already are in the ranks of the Google Haters.
But for the moment, let’s attempt to set such horrors and the politics aside, and look at Dragonfly in the cold, hard logic of available data. Google famously considers itself to be a “data-driven” company. Does the available data suggest that Dragonfly would be practical for Google to implement and operate going forward?
The answer is clearly negative.
Philosopher George Santayana’s notable assertion that “Those who cannot remember the past are condemned to repeat it” is basically another way of saying “If you ignore the data staring you in the face, don’t be surprised when you get screwed.”
And the data regarding the probability of getting burned, screwed, or otherwise bulldozed by China is plentiful.
Google of course has plenty of specific data in hand about this. They tried providing censored search to China around a decade ago. The result was (as many of us predicted at the time) ever-increasing demands for more censorship and more control from the Chinese government, and then a series of Chinese-based hack attacks against Google itself, causing Google to correctly pull the plug on that project.
Fast forward to today, and Google management seems to be asserting that somehow THIS time it will all be different and work out just fine. Is there any data to suggest that this view is accurate?
Again, the answer is clearly no. In fact, vast evidence suggests exactly the opposite.
The optimistic assertions of Dragonfly proponents might have a modicum of validity if there were any evidence that China has been moving in a positive direction relating to speech and other human rights (in either or both of the technological and non-technological realms) in the years since Google’s original attempt to provide censored Chinese search.
But the data regarding China’s behavior over this period clearly demonstrates China moving in precisely the contrary direction!
China has used this time not to improve the human rights of its people, but to massively tighten its grip and to escalate its abuses in nightmarish ways. And especially to the point of this discussion, China’s ever more dictatorially monitored and controlled Internet has become a key tool in the government’s campaign of terror.
China has turned the democratic ideals of the Internet’s founders on their heads, morphing its own Internet into a bloody bludgeon to use against its own people, and even against Chinese persons living outside of China.
The reality of course is that China is an economic powerhouse — the West has already sold its economic soul to China to a major degree. There is no reversing that in the foreseeable future. Neither threats nor tariffs will make a real difference.
But we still do have some free choice when it comes to China.
And one specific choice — a righteous and honorable choice indeed — is to NOT get into bed with the Chinese dictators’ Internet control and censorship regime.
Giving the Chinese government dictators any control over Google search results would be effectively tantamount to embracing their horrific abuses — PR releases to the contrary notwithstanding.
The data — the history — teaches us clearly that there is no “just dipping your toe into the water” when it comes to collaboration with unrepentant, dictatorial regimes in the process of extending and accelerating their abuses, as is the case with China. You will not be able to make China behave any “better” through your actions. But you will inevitably be ultimately dragged body and soul into their putrid deeps.
The data is obvious. The data is devastating.
Google should immediately end its dance with China over Chinese censored search. Dragonfly and any similar projects should be put out of their miseries for good and all.
–Lauren–
Do you know why Facebook is called Facebook? The name dates back to founder Mark Zuckerberg’s “FaceMash” project at Harvard, designed to display photos of students’ faces (without their explicit permissions) to be compared in terms of physical attractiveness. Essentially, a way he and his friends could avoid dating “ugly” people by his definition. Zuck even toyed with the idea of comparing those student photos with shots of farm animals.
Immature. Exploitative. Verging on pre-echoes of evils to come.
Fast forward to Facebook of today. As we’ve watched Zuckerberg’s baby expand over the years like a mutant virus from science fiction, we’ve had plenty of warnings that the at best amoral attitudes of Zuck and his hand-picked cronies have permeated the Facebook ecosystem.
It’s long been a given that Facebook ruthlessly controls, limits, and manipulates the data that users are shown — to its own financial advantage.
But long before we learned of Facebook’s deep embeds in right-wing politics, and the Russians’ own deep manipulative embeds in Facebook, there were other clues that Facebook’s ethical compass was virtually nonexistent.
Remember when it was discovered that Facebook was manipulating information shown to specific sets of users to see if their emotional states could be altered by such machinations without their knowledge?
Over and over again, Facebook has been caught in misstatements, in subterfuge, in outright lies — including the recent revelations of their paying an outside PR hit firm to fabricate attack pieces on other firms to divert attention from Facebook’s own spreading problems, even to the extent of the firm reportedly spreading false antisemitic conspiracy theories.
Zuck and Chief Operating Officer Sheryl Sandberg found an outgoing employee to fall on his sword to take official responsibility for this, and initially both Zuck and Sheryl publicly disclaimed any knowledge of that outside firm’s actions. But now Sheryl has apparently reversed herself, admitting that information about the firm did reach her desk. And do you really believe that control freaks like Mark Zuckerberg and Sandberg weren’t being kept informed about this in some manner all along? C’mon!
Facebook of course is not the only large Internet firm with ethical challenges. Recently in “The Death of Google” (https://lauren.vortex.com/2018/10/08/the-death-of-google), and “After the Walkout, Google’s Moment of Truth” (https://lauren.vortex.com/2018/11/03/after-the-walkout-googles-moment-of-truth), I noted Google’s own ethical failings of late, and my suggestions for making Google a better Google. Importantly, those posts were not predicting Google’s demise, but rather were proposing means to help Google avoid drifting further from the admirable principles of its founding (“organizing and making available the world’s information” — in sharp contrast to Facebook’s seminal “avoid dating ugly people” design goal). So both of those posts regarding Google were in the manner of Dickens’ “Ghost of Christmas Future” — a discussion of bad outcomes that might be, not that must be.
Saving Google is a righteous and worthy goal.
Not so Facebook. Facebook’s business model is and has always been fundamentally rotten to its core, and the more that this core has been exposed to the public, the more foul the stench of rotten decay that Facebook emits.
“Saving” Facebook would mean helping to perpetuate the sordid, manipulative mess of Facebook today, that reaches back to its very beginnings — a creation that no longer deserves to exist.
In theory, Facebook could change its ways in positive directions, but not without abandoning virtually everything that has characterized Facebook since its earliest days.
And there is no indication — zero, none, nil — that Zuckerberg has any intention of letting that happen to his self-made monster.
So in the final analysis — from an ethical standpoint at least — there is no point to trying to “save” Facebook — not from regulators, not from politicians, and certainly not from itself.
The likely end of Facebook as we know it today will not come tomorrow, or next month, or even perhaps over a short span of years.
But the die has been cast, and nothing short of a miracle will save Facebook in the long run. And whether or not you believe in miracles, Facebook doesn’t deserve one.
–Lauren–
Some new studies are quantifying the levels of toxic emissions from conventional 3D printers using conventional plastic filaments of various types. The results are not particularly encouraging, but are not a big surprise. They are certainly important to note, and since I’ve discussed the usefulness of 3D printing many times in the past, I wanted to pass along some of my thoughts regarding these new reports. (Gizmodo’s summary is here: https://gizmodo.com/new-study-details-all-the-toxic-particles-spewed-out-by-3d-p-1830379464).
The big takeaways are pretty much in line with what we already knew (or at least suspected), but add some pretty large exclamation points.
PLA filament generally produces far fewer toxic emissions than most other filament compositions (especially ABS), and is what I would almost always recommend using in the vast majority of cases.
The finding that inexpensive filaments tend to have more emissions than “name brands” is interesting, probably related to levels of contaminants in the raw filament ingredients. However, in practice filament has become so fungible — with manufacturers putting different brand names on the same physical filament from the same factories — it’s often difficult to really know if you’re definitely buying the filament that you think you are. And of course, the most widely used filaments tend to be among the most inexpensive.
My own recommendation has always been to never run a 3D printer that doesn’t have its own enclosed build chamber (and the overwhelming majority don’t) in a room routinely occupied by people or animals — print runs can take many hours, and emissions continue the entire time. Printing outside isn’t typically practical due to air currents and sudden temperature changes. A generally good location for common “open” printers is a garage, ideally with a ventilation fan.
The reported fact that filament color affects emissions is not unexpected — there has long been concern about the various additives that are used to create these colors. Black filament is probably the worst case, since it tends to have all sorts of leftover filament scraps and gunk thrown into the mix — the fact that black filament tends to regularly clog 3D printers is another warning sign.
Probably the safest choice overall when specific colors aren’t at issue, is to print with “natural color” (whitish, rather transparent) PLA filament, which tends to have minimum additives. It also is typically the easiest and most reliable to print with, probably for that same reason.
The finding that there is a “burst” of aerosol emissions when printing begins is particularly annoying, since it’s when printing is getting started that you tend to be most closely inspecting the process looking for early print failures.
So the bottom line is pretty much what you’d expect — breathing the stuff emanating from molten plastic isn’t great for you. Then again, even though it only heated the plastic sheets for a few minutes at a time (as opposed to the hours-long running times of modern 3D printers), I loved my old Mattel “VAC-U-FORM” when I was a kid — and who knows how toxic the plastics heated in that beauty really were (https://www.youtube.com/watch?v=lCvgvWiZNe8). Egads, not only can you still get them on eBay, replacement parts and plastic refill packs are still being sold as well!
I guess that they got it right in “The Graduate” after all: https://www.youtube.com/watch?v=Dug-G9xVdVs
Be seeing you.
–Lauren–
UPDATE (November 22, 2018): Save Google — but Let Facebook Die
– – –
Google has reached what could very well be an existential moment of truth in its corporate history.
The recent global walkout of Google employees and contractors included more than 20,000 participants by current counts, and the final numbers are almost certain to be even higher. This puts total participation at something north of 20% of the entire firm — a remarkable achievement by the organizers.
Almost a month ago, when I posted my concerns regarding the path that this great company has been taking, and the associated impacts on both their employees and users (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google), the sexual assault and harassment issues that were the proximate trigger for the walkout were not yet known publicly — not even to most Googlers.
These newly reported management failures clearly fit tightly into the same pattern of longstanding issues that I’ve frequently noted, and various broad concerns related to Google’s accountability and transparency that have been cited as additional foundational reasons for the walkout.
Google today — almost exactly twenty years since its founding — is at a crossroads. The decisions that management makes now regarding the issues that drove the walkout and other issues of concern to Googlers, Google’s users, and the world at large, will greatly impact the future success of the firm, or even how long into the future Google will continue to exist in a recognizable form at all.
That so many of these issues have reached the public sphere at around the same time — sexual abuse and harassment, Googlers’ concerns about military contracts and a secret project aimed at providing Chinese-government censored search, and more — should not actually be a surprise.
For all of these matters are symptomatic of larger problematic ethical factors that have crept into Google’s structure, and without a foundational change of direction in this respect, new concerns will inevitably keep arising, and Google will keep lurching from crisis to crisis.
The walkout organizers will reportedly be meeting with Google CEO Sundar Pichai imminently, and I fully endorse the organizers’ publicly stated demands.
But management deeds are needed — not just words. After a demonstration of this nature, it’s all too easy for conciliatory statements to not be followed by concrete and sustained actions, and then for the original status quo to reassert itself over time.
This is also a most appropriate moment for Google to act on a range of systemic factors that have led to transparency, accountability, and other problems associated with Google management’s interactions with rank-and-file employees, and between Google as a whole and its users.
Regarding the latter point, since I’ve many times over the years publicly outlined my thoughts regarding the need for Google employees dedicated to roles such as ombudsperson, user advocates, and ethics officer (call the latter “Guardian of Googleyness” if you prefer), I won’t detail these crucial positions again here now. But as the walkout strongly suggests, these all are more critically needed by Google than ever before, because they all connect back to the basic ethical issues at the core of many concerns regarding Google.
These are all interconnected and interrelated matters, and attempts to improve any of them in isolation from the others will ultimately be like sweeping dirt under the proverbial rug — such problems are pretty much guaranteed to eventually reemerge with even more serious negative consequences down the line.
Google is indeed a great company. No firm can be better than its employees, and Google’s employees — a significant number of whom I know personally — have through their walkout demonstrated to the world something that I already knew about them.
Googlers care deeply about Google. They want it to be the best Google that it possibly can be, and that means meeting high ethical standards vertically, horizontally, and from A to Z.
Now it’s Google’s management’s turn. Can they demonstrate to their employees, to Google’s users, and to the global community, that loyalty towards Google has not been misplaced?
We shall see.
–Lauren–
Did you miss us?
Back in 2019, Drowned in Sound hit pause and archived this site.
In the meantime, our community has remained active here.
But we've made a return that's a little different from before.
You can keep up to date with what we're doing by signing up to our free Substack newsletter.
What's New?
We're calling the new era of Drowned in Sound an "audio publication" because it's a magazine-style podcast, with a singles club on our independent label, which acts a bit like a magazine cover mount.
The Podcast
Our podcast was originally launched back in 2005 and won a few awards, so it's really exciting to finally reboot it. You'll find season one of the podcast wherever you get your podcasts (Spotify | Apple | Etc.), looking at the foundations and future of the music industry, speaking to a range of experts.
The Independent Label
Our singles club previously launched the careers of a range of artists including Bat for Lashes, Kaiser Chiefs, Emmy the Great, Martha Wainwright and more.
The 2023 edition of the club kicked off with Faith Vern from DiS-favourites PINS, under the guise of The Faux Faux, with the menacing, smokey, 'Cold Hearted Woman'.
On March 9th, the UK's Climate Change Committee concluded that the UK could meet its electricity needs in 2035 and 2050 with a mixture of renewables, nuclear and what it calls 'low-carbon dispatchable generation', plus a small amount of unabated natural gas.[1] It made its positive assessment by studying typical and extreme weather patterns in the past. It then calculated whether the possible portfolio of wind, solar and nuclear envisaged by government targets would generate sufficient power, if combined with some combination of storage, natural gas with CCS, and hydrogen. The conclusion was that the transition is possible, even alongside a 50% rise in electricity consumption by 2035 and a doubling by 2050. But it also said that the current pace of installation was insufficient to meet the targets for decarbonisation.
I wrote some comments in the note below, including a query as to whether the enormous cost (and difficulty) of electricity transmission upgrades is being fully considered. Extra infrastructure may double the cost of the new electricity generation capacity.
***
Many energy commentators reacted with enthusiasm to the CCC's work. The report was seen as a strong signal to the UK government, and to the many sceptics, that a renewables-based system can - very largely - drive unabated natural gas off the country's electricity grid. And it is indeed a very impressive piece of analysis; its workings are detailed without being obscure.
For the first time, the CCC sees a potentially large role for hydrogen as the key balancing energy source in the electricity system. When the wind is blowing hard, surplus power will be sent to electrolysers where it will be turned into hydrogen. The hydrogen will be stored in salt caverns beneath the ground and combusted in gas turbines when the wind is still.
The CCC leaves the precise size of the hydrogen contribution unclear, saying that the cost compared to natural gas with CCS is not certain. So it merges abated gas and hydrogen into 'low carbon dispatchable generation' without being wholly specific about the share of hydrogen. It is also ambivalent about the role of electrolysis versus autothermal reforming of natural gas to make the product.
Nevertheless, for those of us who have been pushing the importance of hydrogen for storage of electricity for several years, this is an important moment. The language of the document is entirely different from a previous 2018 CCC report on hydrogen which concluded
'the low overall efficiency of electrolysis and the relatively high value of using electricity as an input mean that the costs of producing bulk electrolytic hydrogen within the UK are likely to be high.'[2]
That language has now completely changed with an acceptance that the role of hydrogen is to take electricity at times when power prices are very low and store it for periods when prices are high. Table 3.1 in the recent CCC report gives a figure of just £22 per megawatt hour for hydrogen production for 2035 (but in 2012 prices).
The world's most respected climate agency has given the argument for 'renewables plus hydrogen' a good chance to break through into the policy mainstream.
Let's briefly look at the key aspects of the work. This is not to question the main thrust of the conclusions but to examine their implications.
1, The report essentially uses government targets for renewables installations as the basis of its figures. The figures for new capacity are not independently generated.
By 2030, the UK plan is to have more than 50 GW of offshore wind, compared to about 15 GW today. That means an installation rate of about 7 GW a year, a demanding target. But the CCC appears to use the 50 GW 2030 target as its estimate of 2035 capacity.
For onshore wind, the CCC is much less bullish, assuming 28 GW in 2035, up from about 13 GW today. The implied installation rate is less than 2 GW a year. Even this is probably unlikely unless the UK government reverses its effective ban on onshore wind in England and Wales. Scottish development isn't sufficiently fast. Solar rises by about 55 GW before 2035, up from about 15 GW today. The estimates for 2050 requirements are 115 GW offshore, 31 GW onshore and 105 GW of solar.
2, Because this is the stated government view, the critical assumption that the CCC has had to run with is that new nuclear will become an important part of the UK's portfolio. By 2050, the estimate used is for 24 GW of operational nuclear, which will cover about a third of the UK's total electricity need. 10 GW is assumed to be available in 2035.
Is this likely? No, it is not. Sizewell B, the last of the UK's existing nuclear plants, will probably be decommissioned around that year. Hinkley Point C, possibly to be completed mid-decade, has a capacity of 3.2 GW. So two more new nuclear plants will be needed in the next twelve years. One has to be a blind optimist to believe that this is possible. And, of course, the price will probably be more than twice that for wind or solar.
The likelihood is that the UK will construct no more than a couple of new nuclear plants. The key implication is therefore that we will need more wind and more solar than the CCC says. Very roughly, the likelihood is that instead of 115 GW of offshore wind, about 150 GW will be needed. The key effect on the CCC's arguments is that because wind is variable, the amount of hydrogen capacity needed for storage will be much more than they predict.
3, The CCC does not extensively deal with the interaction between the EU energy markets and those of Great Britain. This is important, but is unfortunately typical of most official documents across energy and other policy areas. Yes, there is mention of electricity interconnectors but it seems to be assumed that EU countries will accept a large portion of GB surpluses. This is unlikely because at times of UK high winds, most of northern Europe will be similarly harvesting overwhelming yields of electricity. As I write this on Monday 13th February, the UK's wind is providing over 20 GW of power, while in Denmark turbines are currently giving more to the Danish grid than the entire national electricity consumption.
The modelling behind the CCC work appears not to consider this problem, perhaps because of an assumption that other countries will not invest as extensively in new wind capacity. But, for example, the Netherlands government has a target of 70 GW of offshore wind by 2050, a figure that would provide almost three times the current Dutch power needs over the course of the year. Netherlands producers will want to export at exactly the same time as UK wind farms. As far as I could see, there was no mention of this in the entire CCC report. Other northern European countries also have major (and well-documented) strategies to expand offshore wind.
4, There are similar problems with the discussion of hydrogen interconnectivity. Although there is mention of pipelines in the CCC report, there seems to be an absence of consideration of the effect of the development of a full European hydrogen network. Moving energy around in pipelines, even over several thousand kilometres, is very much cheaper than using electricity networks. (There is a good reason why UK domestic electricity bills have a charge for transmission and distribution that is four times the fee for gas per kilowatt hour!). If we need more hydrogen, the cheapest way to get it will probably be through the proposed EU pipeline system, probably partly fed from northern European wind and hydro and north African solar. The UK energy system - both electricity and hydrogen - will almost certainly be very much more tightly integrated with Europe than the CCC suggests. Once again, one presumes this is because wider UK government policy does not want to acknowledge the utterly central role of links to the mainland (and Ireland) in our energy policy.
5, The CCC mentions extensively, but does not fully quantify, the striking requirements that the UK has to improve its electricity transmission and distribution systems in order to make the 'renewables plus hydrogen' transition possible. This issue needs urgently to be brought to the forefront of our discussions. As with many other European countries, the development of new electricity resources, and the electrification of heating and transport, is being impeded by the lack of capacity in both high voltage and low voltage segments. The unrecognised reality is that upgrading our infrastructure may cost as much as the whole of the extra renewables installations in the period to 2050. And, unfortunately, much of the required investment will have to be made well before the new capacity comes on line. Which will mean businesses and families paying for the full transition soon, and before the majority of the benefits show. By the way, this problem affects most other advanced countries as well.
The tables below show some indication of the scale of the challenge facing supporting infrastructure by comparing the cost of new renewables to the cost of new electricity transmission and distribution. I have had to use estimates for many of these calculations but I think they are broadly correct. My logic is in the appendix below.
The cost of new renewables to 2050
a) Offshore wind at £1.5billion a gigawatt = £150bn
b) Onshore wind at £1 billion a gigawatt = £18bn
c) Solar at £0.5 billion a gigawatt = £45bn
Total = £213bn
The cost of supporting infrastructure
a) Distribution networks (i.e. DNO spend) = £60-180bn (source: CCC report, page 66)
b) Transmission network (i.e. ESO spend) =
· Offshore wind at £0.6 billion per gigawatt = £60 bn
· Onshore wind at £0.3 billion per gigawatt = £5.4bn
· Solar at £0.2 billion per gigawatt = £18bn
Total = £143.4 - £263.4bn
Under some projections, the cost of infrastructure upgrades to allow full use of renewables will therefore exceed the cost of the installations themselves. The unfortunate implication is that the supporting infrastructure may approximately double the levelised cost of the electricity produced by these installations. That is a very tough conclusion for those of us who want a rapid transition.
Appendix.
The cost of renewables is taken from recent figures published about very large scale projects, slightly reduced to take into account likely cost cuts over the next decade.
The cost of distribution (essentially the low voltage networks that take power to buildings) is taken from the CCC report.
The cost of transmission infrastructure is calculated from a recent Ofgem estimate that the cost onshore of putting 50 GW offshore in place by 2030 is about £21bn. See the summary by lawyers CMS at https://cms-lawnow.com/en/ealerts/2023/01/accelerating-onshore-electricity-transmission-investment-a-step-forward-for-low-carbon-generation.
Onshore wind and solar will require less investment in transmission per gigawatt. Many schemes will actually connect to the distribution system, not the National Grid. I have roughly estimated a figure of £0.3bn per gigawatt for wind and £0.2bn per gigawatt for solar based on the offshore numbers.
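The appendix's arithmetic can be reproduced in a few lines. This is only a sketch of the back-of-envelope figures above: the £/GW rates and the capacity additions (2050 targets minus today's installed base) are the post's own estimates, not official data.

```python
# All values in £bn and GW, taken from the figures quoted above.
offshore_gw = 115 - 15   # 100 GW of new offshore wind to 2050
onshore_gw = 31 - 13     # 18 GW of new onshore wind
solar_gw = 105 - 15      # 90 GW of new solar

# New generation, at the per-gigawatt costs in the appendix
gen_cost = 1.5 * offshore_gw + 1.0 * onshore_gw + 0.5 * solar_gw

# Transmission (ESO) spend, derived from the Ofgem offshore estimate
trans_cost = 0.6 * offshore_gw + 0.3 * onshore_gw + 0.2 * solar_gw

# Distribution (DNO) spend range, taken from the CCC report
dist_low, dist_high = 60, 180

print(f"New renewables:   £{gen_cost:.0f}bn")
print(f"Supporting infra: £{trans_cost + dist_low:.1f}bn to £{trans_cost + dist_high:.1f}bn")
```

Running the numbers this way makes the headline comparison easy to check: roughly £213bn of generation against £143bn-£263bn of supporting infrastructure.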
[1] https://www.theccc.org.uk/2023/03/09/a-reliable-secure-and-decarbonised-power-system-by-2035-is-possible-but-not-at-this-pace-of-delivery/
[2] https://www.theccc.org.uk/publication/hydrogen-in-a-low-carbon-economy/
People in Denmark living close to a new wind turbine or solar farm receive a yearly payment. In most cases, the amount corresponds to the value of the output of 6.5 kilowatts from the new renewable generator. I think the UK should consider a similar scheme here, but probably more generous - to increase local support for wind and solar power. We need to rapidly unlock the exploitation of the country's fabulous coastal (and some inland) wind resources.
Details of the Danish scheme (full text at bottom of page)
· Any household living within 8 times the height of a wind turbine or 200 metres from the nearest solar panel is eligible. The average new onshore turbine (measured to the highest tip) is likely to be around 100 metres, implying a qualifying distance of up to 800 metres.
· Each household is awarded the value of the output of 6.5 kilowatts both for solar and wind.
· The maximum proportion of the output of the wind or solar site that can be paid to local householders is 1.5% of the total. In the event that 6.5 kilowatts multiplied by the number of eligible homes would exceed 1.5%, the value of the payment is cut proportionately so that the total does not go over this limit.
Householders have to apply for the payment.
What would be the implications of this scheme if used in the UK?
· 6.5 kilowatts of wind power is likely to produce about 19 megawatt hours a year on a reasonable site close to a coast. (33% capacity factor assumed). At a value of £60 per megawatt hour, the payment would be about £1,100 per year.
· 6.5 kilowatts of solar power should achieve slightly more than 6 megawatt hours a year on the coasts or in the southern part of England. (11% capacity factor assumed). This would generate a payment of about £375.
Of course, the numbers in March 2023 would be much larger because of the unusually high prices for wholesale electricity. They might be double these levels.
Would the cap of 1.5% of output typically come into play?
· A new onshore turbine installed in 2023 might have a maximum output of 4 megawatts. Therefore the 1.5% maximum would be 60 kilowatts, meaning only 9 households could benefit before the annual payment was scaled back.
· A solar farm typically could have a capacity of 10 megawatts. This would allow 23 households to benefit before proportional cuts were made.
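The Danish figures above follow directly from capacity × capacity factor × hours × price. A small sketch, using the capacity factors and the "normal" £60/MWh wholesale price assumed in this post:

```python
HOURS_PER_YEAR = 8760

def annual_payment(kw, capacity_factor, price_per_mwh):
    # Value of a year's output from `kw` of renewable capacity
    mwh_per_year = kw * capacity_factor * HOURS_PER_YEAR / 1000
    return mwh_per_year * price_per_mwh

# Each eligible household gets the value of 6.5 kW of output
wind_payment = annual_payment(6.5, 0.33, 60)   # about £1,100 a year
solar_payment = annual_payment(6.5, 0.11, 60)  # about £375 a year

# The 1.5% cap: how many households can receive the full payment?
def max_full_payments(site_kw, per_household_kw=6.5, cap=0.015):
    return int(site_kw * cap // per_household_kw)

wind_cap = max_full_payments(4000)    # 4 MW turbine -> 9 households
solar_cap = max_full_payments(10000)  # 10 MW solar farm -> 23 households
```

The same two functions reproduce both the per-household payments and the point at which the cap starts scaling them back.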
What changes might make this work for the UK?
We know there is broad support for wind and solar, even when it is developed in people's immediate proximity. The UK government has just published its latest opinion survey on the topic.[1] This shows that only 12% would be unhappy about a wind farm in their local area, and 7% are similarly opposed to solar. (However, these figures may be slightly inaccurate because some respondents said that a solar or wind farm would be impossible in their area and therefore didn't say whether they opposed one or not.) For comparison, only 4% of people are generally unhappy with solar, wherever it is sited, and 11% oppose wind.
My guess - and of course it is only a guess - is that the payments might need to rise to a maximum of 3% of the revenue of the renewable site and payments be made corresponding to up to 10 kW of capacity. This could extend up to 1km from a turbine and 300 metres from a solar farm.
This would mean that a home in a wind turbine's area might get £1,700 a year at a 'normal' wholesale electricity price of £60. That would be greater than the typical electricity bill. Solar would provide a fee of just under £600.
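Under the same arithmetic, the more generous UK variant suggested above works out as follows. The 10 kW entitlement is my proposed figure, not an existing scheme's:

```python
HOURS_PER_YEAR = 8760

def annual_payment(kw, capacity_factor, price_per_mwh):
    # Value of a year's output from `kw` of capacity
    return kw * capacity_factor * HOURS_PER_YEAR / 1000 * price_per_mwh

wind_uk = annual_payment(10, 0.33, 60)   # roughly £1,700 a year
solar_uk = annual_payment(10, 0.11, 60)  # just under £600 a year
```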
Perhaps these bonuses would help bring local communities behind new renewables developments. And allow elected politicians to actively support them, rather than almost universally oppose them for fear of the consequences at the next polling date. It might even help unlock the London government's almost total ban on new English onshore wind.
Most importantly, it would give local people a sense of de facto ownership of the asset. In my experience, nothing promotes wind or solar better than the feeling that every time the sun comes from behind the clouds, or the turbine spins once, a small amount of money has been earned.
[1]https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1140685/BEIS_PAT_Winter_2022_Energy_Infrastructure_and_Energy_Sources.pdf
I am very grateful to my daughter Ursula Brewer, operations manager at solar developer Better Energy in Copenhagen, for informing me about the Danish scheme.

As a UX professional in today's data-driven landscape, it's increasingly likely that you've been asked to design a personalized digital experience, whether it's a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.
That's where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).

Growing tools for personalization: According to a Dynamic Yield survey, 39% of respondents felt support is available on-demand when a business case is made for it (up 15% from 2020).
Source: "The State of Personalization Maturity - Q4 2021". Dynamic Yield conducted its annual maturity survey across roles and sectors in the Americas (AMER), Europe and the Middle East (EMEA), and the Asia-Pacific (APAC) regions. This marks the fourth consecutive year of the research, which includes more than 450 responses from individuals in the C-Suite, Marketing, Merchandising, CX, Product, and IT.
Getting Started
For the sake of this article, we'll assume you're already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:
- Your organization or client purchased a content management system (CMS) or marketing automation platform (MAP) or related technology that supports personalization
- The CMO, CDO, or CIO has identified personalization as a goal
- Customer data is disjointed or ambiguous
- You are running some isolated targeting campaigns or A/B testing
- Stakeholders disagree on personalization approach
- Mandate of customer privacy rules (e.g. GDPR) requires revisiting existing user targeting practices
Workshopping personalization at a conference.
Regardless of where you begin, a successful personalization program will require the same core building blocks. We've captured these as the "levels" on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.
From the ground up: Soup-to-nuts personalization, without going nuts.
From top to bottom, the levels include:
- North Star: What larger strategic objective is driving the personalization program?
- Goals: What are the specific, measurable outcomes of the program?
- Touchpoints: Where will the personalized experience be served?
- Contexts and Campaigns: What personalization content will the user see?
- User Segments: What constitutes a unique, usable audience?
- Actionable Data: What reliable and authoritative data is captured by our technical platform to drive personalization?
- Raw Data: What wider set of data is conceivably available (already in our setting) allowing you to personalize?
We'll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We've found them helpful in personalization brainstorming sessions, and will include examples for you here.
Personalization pack: Deck of cards to help kickstart your personalization brainstorming.
Starting at the Top
The components of the pyramid are as follows:
North Star
A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you wish to accomplish? North Stars cast a shadow. The bigger the star, the bigger the shadow. Examples of North Stars might include:
- Function: Personalize based on basic user inputs. Examples: "Raw" notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
- Feature: Self-contained personalization componentry. Examples: "Cooked" notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
- Experience: Personalized user experiences across multiple interactions and user flows. Examples: Email campaigns, landing pages, advanced messaging (i.e. C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization).
- Product: Highly differentiating personalized product experiences. Examples: Standalone, branded experiences with personalization at their core, like the "algotorial" playlists by Spotify such as Discover Weekly.
North star cards. These can help orient your team towards a common goal that personalization will help achieve; Also, these are useful for characterizing the end-state ambition of the presently stated personalization effort.
Goals
As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal, rather it is a means to an end. Common goals include:
- Conversion
- Time on task
- Net promoter score (NPS)
- Customer satisfaction
Goal cards. Examples of some common KPIs related to personalization that are concrete and measurable.
Touchpoints
Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a user's experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up etc.). Here are some examples:
Channel-level Touchpoints
- Email: Role
- Email: Time of open
- In-store display (JSON endpoint)
- Native app
- Search
Wireframe-level Touchpoints
- Web overlay
- Web alert bar
- Web banner
- Web content block
- Web menu
Touchpoint cards. Examples of common personalization touchpoints: these can vary from narrow (e.g., email) to broad (e.g., in-store).
If you're designing for web interfaces, for example, you will likely need to include personalized "zones" in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
Targeted Zones: Examples from Kibo of personalized "zones" on page-level wireframes occurring at various stages of a user journey (Engagement phase at left and Purchase phase at right). Source: "Essential Guide to End-to-End Personalization" by Kibo.
Contexts and Campaigns
Once you've outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools refer to these as "campaigns" (for example, a campaign on a web banner for new visitors to the website). These are shown programmatically at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context model helps you consider the user's level of engagement at the personalization moment, for example a user casually browsing information vs. doing a deep dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an "Enrich" campaign that shows related articles may be a suitable supplement to existing content).
Personalization Context Model:
- Browse
- Skim
- Nudge
- Feast
Personalization Content Model:
- Alert
- Make Easier
- Cross-Sell
- Enrich
We've written extensively about each of these models elsewhere, so if you'd like to read more you can check out Colin's Personalization Content Model and Jeff's Personalization Context Model.
Campaign and Context cards. This level of the pyramid can help your team focus on the types of personalization to deliver to end users and the use cases in which they will experience it.
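To make the pairing of the two models concrete, here is a minimal sketch of how a context could drive campaign selection. The context and content vocabularies come from the models above; the mapping itself and all function names are illustrative, not a prescribed rule set from any personalization platform.

```typescript
// Context and content vocabularies from the article's two models.
type Context = "browse" | "skim" | "nudge" | "feast";
type Campaign = "alert" | "make-easier" | "cross-sell" | "enrich";

// Hypothetical default mapping: which campaign style suits which
// information retrieval behavior. Your own mapping would be informed
// by user research, not hardcoded like this.
const defaultMapping: Record<Context, Campaign> = {
  browse: "cross-sell", // casual exploration: surface related offers
  skim: "make-easier",  // fast scanning: reduce friction
  nudge: "alert",       // decision moment: timely, relevant prompts
  feast: "enrich",      // deep dive: supplement with related content
};

function pickCampaign(context: Context): Campaign {
  return defaultMapping[context];
}
```

A touchpoint could then ask `pickCampaign("feast")` and receive `"enrich"`, i.e., show related articles to a deep-diving user.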
User Segments
User segments can be created prescriptively or adaptively, based on user research (e.g. via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:
- Unknown
- Guest
- Authenticated
- Default
- Referred
- Role
- Cohort
- Unique ID
Segment cards. Examples of common personalization segments: at a minimum, you will need to consider the anonymous, guest, and logged-in user types. Segmentation can get dramatically more complex from there.
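As a sketch of how the minimum segmentation (unknown, guest, authenticated) might be resolved in code, assuming hypothetical names (`Visitor`, `resolveSegment`) rather than any specific platform's API:

```typescript
// The three baseline segments every program needs to handle.
type Segment = "unknown" | "guest" | "authenticated";

// Hypothetical visitor state; a real implementation would read this
// from session data, a cookie, or a post-cookie identifier.
interface Visitor {
  isLoggedIn: boolean;
  hasReturningCookie: boolean; // stateful cookie or equivalent identifier
}

function resolveSegment(v: Visitor): Segment {
  if (v.isLoggedIn) return "authenticated";
  if (v.hasReturningCookie) return "guest";
  return "unknown";
}
```

More advanced segments (referred, role, cohort, unique ID) would layer additional rules or model outputs on top of this baseline resolution.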
Actionable Data
Every organization with any digital presence has data. It's a matter of asking what data you can ethically collect from users, how reliable and valuable it is, and how you can put it to use (sometimes known as "data activation"). Fortunately, the tide is turning toward first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.
Source: "The State of Personalization 2021" by Twilio. Survey respondents were n=2,700 adult consumers who have purchased something online in the past 6 months, and n=300 adult manager+ decision-makers at consumer-facing companies that provide goods and/or services online. Respondents were from the United States, United Kingdom, Australia, and New Zealand. Data was collected from April 8 to April 20, 2021.
First-party data offers multiple advantages on the UX front: it's relatively simple to collect, more likely to be accurate, and less susceptible to the "creep factor" of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:
Figure 1.1.2: Example of a personalization maturity curve, showing progression from basic recommendations functionality to true individualization. Credit: https://kibocommerce.com/blog/kibos-personalization-maturity-chart/
There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move toward more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.
While some combination of implicit and explicit data (often discussed as first-party and third-party data) is generally a prerequisite for any implementation, ML efforts are typically not cost-effective right out of the box. This is because a strong data backbone and content repository are prerequisites for optimization. These approaches should still be considered part of the larger roadmap, and may indeed help accelerate the organization's overall progress. Typically, at this point you will partner with key stakeholders and product owners to design a profiling model, which defines your approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach that makes profiling scalable.
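As a rough illustration of what a simple profiling model might look like in code (all field and function names here are hypothetical, not from any specific platform):

```typescript
// A minimal profile: explicit (declared) data plus implicit (observed) signals.
interface Profile {
  profileKey: string;               // e.g., a hashed first-party identifier
  explicit: Record<string, string>; // declared data: survey answers, settings
  implicit: Record<string, number>; // observed signals: clicks, dwell time
}

// Record an observed signal against a profile. Implicit data accrues
// over time, which is how profiling grows more granular as confidence
// and data volume increase.
function recordSignal(profile: Profile, signal: string, weight = 1): Profile {
  return {
    ...profile,
    implicit: {
      ...profile.implicit,
      [signal]: (profile.implicit[signal] ?? 0) + weight,
    },
  };
}
```

Segmentation rules and, eventually, ML models would consume structures like this; the point is that the profile schema has to exist before optimization can.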
Pulling it Together
While the cards comprise the starting point of an inventory of sorts (we provide blanks for you to tailor your own), a set of potential levers and motivations for the personalization you aspire to deliver, they are more valuable when thought of as a grouping.
In assembling a card "hand," you can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. This is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog—which is a fine subject for another article.
In the meantime, what's important to note is that while each colored class of card is helpful for surveying the range of choices potentially at your disposal, the real work is in threading through them and making concrete decisions about for whom this decisioning will be made: where, when, and how.
Scenario A: We want to use personalization to improve customer satisfaction on the website. For unknown users, we will create a short quiz to better identify what the user has come to do. This is sometimes referred to as "badging" a user in onboarding contexts, to better characterize their present intent and context.
Lay Down Your Cards
Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with leading CMS platforms like Sitecore and Adobe, or the most exciting composable DXP out there, there is simply no "easy button" wherein a personalization program can be stood up and immediately show meaningful results. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.
The mobile-first design methodology is great—it focuses on what really matters to the user, it's well-practiced, and it's been a common design pattern for years. So developing your CSS mobile-first should also be great, too…right?
Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see "What is Mobile First CSS and Why Does It Rock?"). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that's harder to maintain. Admit it—how many of us willingly want that?
On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you're working on. To help you get started, here's how I go about tackling the factors you need to watch for, and I'll discuss some alternate solutions if mobile-first doesn't seem to suit your project.
Advantages of mobile-first
Some of the things to like with mobile-first CSS development—and why it's been the de facto development methodology for so long—make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing.
Tried and tested. It's a tried and tested methodology that's worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!
Disadvantages of mobile-first
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) requires all higher breakpoints to be regression tested.
The browser can't prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don't leverage the browser's capability to download CSS files in priority order.
The problem of property value overrides
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won't be able to use a utility class for a style that has been reset with a higher specificity.
With this in mind, I'm developing CSS with a focus on the default values much more these days. Since there's no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).
This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component's layout looks like it should be based on Flexbox at all breakpoints, it's fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don't want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!
Though this approach isn't going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.
Having said that, I don't feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that's by no means a requirement.
Closed media query ranges in practice
In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I'm using SCSS for brevity), let's assume there are three visual designs:
- smaller than 768
- from 768 to below 1024
- 1024 and anything larger
Take a simple example where a block-level element has a default padding of "20px," which is overwritten at tablet to be "40px" and set back to "20px" on desktop.
Classic min-width mobile-first
.my-block {
padding: 20px;
@media (min-width: 768px) {
padding: 40px;
}
@media (min-width: 1024px) {
padding: 20px;
}
}
Closed media query range
.my-block {
padding: 20px;
@media (min-width: 768px) and (max-width: 1023.98px) {
padding: 40px;
}
}
The subtle difference is that the mobile-first example sets the default padding to "20px" and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to "20px" and only overrides it at the relevant breakpoint where it isn't the default value (in this instance, tablet is the exception).
The goal is to:
- Only set styles when needed.
- Not set them with the expectation of overwriting them later on, again and again.
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We'll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.
Taking the above example, if we find that the .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and we want to remove the padding altogether, we can do so by setting the mobile padding in a closed media query range.
.my-block {
@media (max-width: 767.98px) {
padding: 20px;
}
@media (min-width: 768px) and (max-width: 1023.98px) {
padding: 40px;
}
}
The browser default padding for our block is "0," so instead of adding a desktop media query and using unset or "0" for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won't get picked up at wider breakpoints. At the desktop breakpoint, we won't need to set any padding style, as we want the browser default value.
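If you adopt closed ranges across a project, it can help to centralize the breakpoint math so the 0.02px subtraction lives in one place. Here's an illustrative SCSS sketch; the mixin and variable names are my own invention, not from any framework, and the breakpoint values match the article's examples:

```scss
// Centralized breakpoints for closed media query ranges.
$bp-tablet: 768px;
$bp-desktop: 1024px;
$epsilon: 0.02px; // keeps adjacent ranges from overlapping at exact widths

@mixin mobile-only {
  @media (max-width: #{$bp-tablet - $epsilon}) { @content; }
}

@mixin tablet-only {
  @media (min-width: #{$bp-tablet}) and (max-width: #{$bp-desktop - $epsilon}) { @content; }
}

.my-block {
  @include mobile-only { padding: 20px; } // exception, not a default
  @include tablet-only { padding: 40px; } // exception
  // Desktop: browser default padding of 0, so no rule is needed.
}
```

The compiled CSS is equivalent to the hand-written closed ranges above; the mixins just make each exception read as an exception.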
Bundling versus separating the CSS
Back in the day, keeping the number of requests to a minimum was very important due to the browser's limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.
With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn't. This is more performant and can reduce the overall time page rendering is blocked.
Which HTTP version are you using?
To determine which version of HTTP you're using, go to your website and open your browser's dev tools, select the Network tab, and reload the page. If the Protocol column isn't visible, right-click any column header (e.g., Name) and enable it. If "h2" is listed under Protocol, HTTP/2 is being used.
Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent browser support for HTTP/2.
Splitting the CSS
Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they're render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.
In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with "Highest" priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they'll be needed later, but with "Lowest" priority.
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.
As noted, with the CSS separated into different files, linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can't assume that desktop users always have a fast connection: in many rural areas, for instance, internet connection speeds are still slow.
The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.
Bundled CSS
<link href="site.css" rel="stylesheet">
This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.
Separated CSS
<link href="default.css" rel="stylesheet">
<link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet">
<link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet">
<link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet">
<link href="print.css" media="print" rel="stylesheet">
Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.
Depending on the project's deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.
Moving on
The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.
I don't think anyone wants to return to that development model again, but it's important we don't lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what's an exception, seems like the natural next step. I've started noticing small simplifications in my own CSS, as well as other developers', and that testing and maintenance work is also a bit simpler and more productive.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what's involved, but first you need to solidly understand the trade-offs you're stepping into.
About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design that's usable and equitable; protects people's privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.
Unfortunately, we're still very far from this ideal.
At the time, I didn't know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and "dark reality" sessions, but I didn't manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of design—which is far from my definition of structurally integrated.
I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that I've found the key that will let us structurally integrate ethics. And it's surprisingly simple! But first we need to zoom out to get a better understanding of what we're up against.
Influence the system
Sadly, we're trapped in a capitalistic system that reinforces consumerism and inequality, and it's obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we're working for an organization that pursues "double-digit growth" or "aggressive sales targets" (which is 99 percent of us), that's very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we're a part of the problem.
What can we do to change this?
We can start by acting on the right level of the system. Donella H. Meadows, a system thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:
- At the lowest level of effectiveness, you can affect numbers such as usability scores or the number of design critiques. But none of that will change the direction of a company.
- Similarly, affecting buffers (such as team budgets), stocks (such as the number of designers), flows (such as the number of new hires), and delays (such as the time that it takes to hear about the effect of design) won't significantly affect a company.
- Focusing instead on feedback loops such as management control, employee recognition, or design-system investments can help a company become better at achieving its objectives. But that doesn't change the objectives themselves, which means that the organization will still work against your ethical-design ideals.
- The next level, information flows, is what most ethical-design initiatives focus on now: the exchange of ethical methods, toolkits, articles, conferences, workshops, and so on. This is also where ethical design has remained mostly theoretical. We've been focusing on the wrong level of the system all this time.
- Take rules, for example—they beat knowledge every time. There can be widely accepted rules, such as how finance works, or a scrum team's definition of done. But ethical design can also be smothered by unofficial rules meant to maintain profits, often revealed through comments such as "the client didn't ask for it" or "don't make it too big."
- Changing the rules without holding official power is very hard. That's why the next level is so influential: self-organization. Experimentation, bottom-up initiatives, passion projects, self-steering teams—all of these are examples of self-organization that improve the resilience and creativity of a company. It's exactly this diversity of viewpoints that's needed to structurally tackle big systemic issues like consumerism, wealth inequality, and climate change.
- Yet even stronger than self-organization are objectives and metrics. Our companies want to make more money, which means that everything and everyone in the company does their best to… make the company more money. And once I realized that profit is nothing more than a measurement, I understood how crucial a very specific, defined metric can be toward pushing a company in a certain direction.
The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.
Redefine success
Traditionally, we consider a product or service successful if it's desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words in a search engine, you'll find diagrams of three equally sized, evenly arranged circles.
But in our hearts, we all know that the three dimensions aren't equally weighted: it's viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:
Desirability and feasibility are the means; viability is the goal. Companies—outside of nonprofits and charities—exist to make money.
A genuinely purpose-driven company would try to reverse this dynamic: it would recognize finance for what it was intended to be: a means. In that view, both feasibility and viability are means to achieve what the company set out to achieve. It makes intuitive sense: to achieve almost anything, you need resources, people, and money. (Fun fact: the Italian language makes no distinction between feasibility and viability; both are simply fattibilità.)
But simply swapping viable for desirable isn't enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people want—whether it's good for them or not. Desirability objectives, such as user satisfaction or conversion, don't consider whether a product is healthy for people. They don't prevent us from creating products that distract or manipulate people or stop us from contributing to society's wealth inequality. They're unsuitable for establishing a healthy balance with nature.
There's a fourth dimension of success that's missing: our designs also need to be ethical in the effect that they have on the world.
This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What I've never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. There's no one way to do this because it highly depends on your culture, values, and industry. But I'll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.
Pursue well-being, equity, and sustainability
We created objectives that address design's effect on three levels: individual, societal, and global.
An objective on the individual level tells us what success is beyond the typical focus of usability and satisfaction—instead considering matters such as how much time and attention is required from users. We pursued well-being:
We create products and services that allow for people's health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our users' time, attention, and privacy, and help them make healthy and respectful choices.
An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:
We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.
Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:
We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.
In short, ethical design (to us) meant achieving well-being for each user and an equitable value distribution within society through a design that can be sustained by our living planet. When we introduced these objectives in the company, for many colleagues, design ethics and responsible design suddenly became tangible and achievable through practical—and even familiar—actions.
Measure impact
But defining these objectives still isn't enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project's well-being, equity, and sustainability.
This overview lists example metrics that you can use as you pursue well-being, equity, and sustainability:
There's a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:
"If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security."
This phenomenon explains why desirability is a poor indicator of success: it's typically defined as the increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?
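As a toy illustration of inverting an engagement metric, here is a hypothetical "attention efficiency" measure that rewards completing tasks with less screen time rather than more. The names and formula are illustrative only, not an established well-being metric:

```typescript
// Per-session stats a product team might already be collecting.
interface SessionStats {
  tasksCompleted: number;
  minutesOnScreen: number;
}

// Higher is better: tasks completed per minute of attention spent.
// Unlike session length, this metric improves as the product demands
// *less* of the user's time for the same outcomes.
function attentionEfficiency(s: SessionStats): number {
  if (s.minutesOnScreen === 0) return 0;
  return s.tasksCompleted / s.minutesOnScreen;
}
```

A dashboard tracking this number upward tells a very different story than one tracking session length upward, even though both are computed from the same logs.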
There's another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmness—say, the number of interface elements—we could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions.
Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for your objectives, you're forced to consider what success looks like concretely and how you can prove that you've reached your ethical objectives. It also forces you to consider what you as a designer have control over: what can you include in your design or change in your process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.
And finally, it's good to remember that traditional businesses run on measurements, and managers love to spend much time discussing charts (ideally hockey-stick shaped)—especially if they concern profit, the one-above-all of metrics. For good or ill, to improve the system, to have a serious discussion about ethical design with managers, we'll need to speak that business language.
Practice daily ethical design
Once you've defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It "simply" becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.
I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the website's end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure your effects?
The redefinition of success will completely change what it means to do good design.
There is, however, a final piece of the puzzle that's missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, it's essential to engage stakeholders in a dedicated kickoff session.
Kick it off or fall back to status quo
The kickoff is the most important meeting, and the easiest to forget to include. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.
In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and express their expectations on the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to get on the same level of understanding and to in turn avoid preventable miscommunications and surprises later in the project.
For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisors' documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from "Manual of Me" (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.
The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the project—in terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?
Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. "As we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y." Compare those odds to a situation in which the team didn't agree to that beforehand and had to ask for permission halfway through the project. The client might argue that that came on top of the agreed scope—and she'd be right.
In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.
We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.
After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:
- the project's origin and purpose: why are we doing this project?
- the problem definition: what do we want to solve?
- the concrete goals and metrics for each success dimension: what do we want to achieve?
- the scope, process, and role descriptions: how will we achieve it?
With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objective—using the tools, methods, and metrics at their disposal to achieve ethical outcomes.
Conclusion
Over the past year, quite a few colleagues have asked me, "Where do I start with ethical design?" My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there's no skipping this step.
To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kickoff sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system on the highest level. Then redefine success to create the space to exercise those levers.
And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.
Otherwise, I'm genuinely sorry to say, you're wasting your precious time and creative energy.
Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as "What will the client think of this?," "Will they take me seriously?," and "Can't we just do it within the design team instead?" In fact, a product manager once asked me why ethics couldn't just be a structured part of the design process—to just do it without spending the effort to define ethical objectives. It's a tempting idea, right? We wouldn't have to have difficult discussions with stakeholders about what values or which key-performance indicators to pursue. It would let us focus on what we like and do best: designing.
But as systems theory tells us, that's not enough. For those of us who aren't from marginalized groups and have the privilege to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We can't remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. If we only keep talking about ethical design and it remains at the level of articles and toolkits, we're not designing ethically. It's just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.
With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything that we need to get started. That's what it means to do daily ethical design.
For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.

A day later came the US publication of the first book, BEYOND THE HALLOWED SKY, by Pyr Books and available via Simon and Schuster, with links to Amazon and other online bookshops.

This book has had kind words from North American authors:
Ken MacLeod does things nobody else does and this is a terrific read.
- Jo Walton, multi-award-winning author of Among Others and What Makes This Book So Great
Sure, some writers knock it out of the park, but with Beyond the Hallowed Sky, Ken MacLeod knocks it right out of the solar system! Too often, space opera throws science out the airlock, but MacLeod has given us a believable faster-than-light adventure that will have you racing through the pages at superluminal speed.
- Robert J. Sawyer, Hugo Award-winning author of The Oppenheimer Alternative
An exceptional blend of international politics, hard science, and first contact.
- Michael Mammay, author of the Planetside series.
I only learned of the existence of Blood And Steel when it was announced SRS were issuing it on DVD. At the time of writing it still doesn't have an Internet Movie Database entry despite this revival. Presumably that will be fixed soon but it shows how obscure this flick is. Blood And Steel was the working title for Bruce Lee's Enter The Dragon and this film, swiping that, is dedicated to Bruce Lee.
Shot in Buffalo, New York, the opening looks more like a regional horror with a woman in a swimming pool having her throat cut. Then a guy gets killed slasher style. Writer, director and star, Mark Swetland - playing himself - is the brother of one of the victims and he goes after the killers. The death of Swetland's screen sister at the get-go in this movie appears designed to resonate with Bruce Lee's sister - played by Angela Mao - committing suicide to avoid being raped by the bad guys early on in Enter The Dragon.
Swetland gets a lead via a photograph that a martial artist from a local dojo is involved in the murder of his sister. So like Bruce Lee busting up the Japanese dojo in Fist of Fury (Chinese Connection in the USA), he lays waste to this martial arts school with his kung fu skills.
Ransacking the fight school's office, Swetland gets a lead to an industrial company. Turns out both operations provide cover for drug dealing. The industrial company invokes the ice factory in Bruce Lee's The Big Boss (Fists of Fury in the USA) which is a front for an illicit drugs operation. Meanwhile the bad guys hire an outside fighter to take care of Swetland - just as Chuck Norris is called in by the gangsters in Way of the Dragon to deal with Bruce Lee.
The stakes escalate as Swetland attempts to free his kidnapped girlfriend. He defeats the martial arts killer hired to take him out in what looks like a school hall with a stage - guess there is no equivalent in Buffalo of Rome's Colosseum, the venue for the Bruce Lee/Chuck Norris fight in Way of the Dragon (although in reality that sequence was glaringly obviously mostly shot on a Hong Kong film set).
As the film sprints toward its finish, Swetland gears up in a yellow jumpsuit like Bruce Lee in Game of Death and besieges the bad guys' HQ. Of course, Swetland triumphs, avenging his sister's death and freeing his girlfriend. Imagine an all-American college jock who is also a Bruce Lee superfan acting out his hero worship by trying to roll elements from five Little Dragon flicks into a single script which he also directs and stars in, and you'll have a pretty good handle on this movie.
The martial arts and action stunts - some involving motorcycle fights/chases rather like those added to Game of Death after Bruce Lee's death - are surprisingly good for an American no-budget flick. Swetland's Bruce Lee muggings during breaks in the fights are too restrained - he's too much the good-guy college jock to indulge in nose-thumbing levels of cockiness, although that has proved in the past to be a sure-fire route to Brucesploitation schlock of the first water.
In terms of content this would make the core of the Brucesploitation genre as I theorised it in my book Re-Enter The Dragon. However, being core is also dictated to a degree by being known to Brucesploitation enthusiasts, since genre is socially negotiated - and because this film isn't known, Blood and Steel slides back into being part of the periphery. That could change, but there's a shortage of groovy seventies stylings on show here - it was made a decade too late - and I'd say this film will ultimately prove to be of more interest to those who dig American regional film than to martial arts fans.
Vanilli's Blockchain Busting Musical Experience "R.U. Cyber.. R.U. Against NFTs?"
Immediate release from: 03/03/2023
"AI-Musement Park comprises a cornucopia of performances / talks / happenings /
documentary & discussion about AI, Intelligences, technocapitalism’s more than
pressing-ongoing urgencies." -Eleanor Dare, Cambridge University & AI-Musement Park
R.U. Cyber.. R.U. Against NFTs? An original AI-Musement Park, PlayLa.bZ & MONDO 2000
History Project human and machine learning co-creation, taking the perspective of an AI that is
training itself on the R.U. Sirius & MONDO Vanilli ‘I’m Against NFT’s’ song lyrics, exploring a
surreal, mind melting and multi-dimensional 360 world of paradoxes and conflicting rules.
"Mondo Vanilli was originally intended to be a virtual reality band exploding all
assumptions about property and propriety in the 1990s. Today fabrication becomes de
rigueur as the connection to the real is intentionally confused by the banal political
tricksters of power and profitability… while storms pound our all-too-human bodies and
communities. I am thrilled to finally see MONDO Vanilli in its appropriate context.
Immersive. Come play in the simulacra one more time" -R.U. Sirius, MONDO 2000
R.U. Cyber.. R.U. Against NFTs? is a satirical, irreverent blockchain-busting commentary on
the propaganda-relations-fueled ‘Web 3’ hype around non-fungible tokens and the broader
issues that underpin our algorithmically massaged, hyper-connected age of infinite scrolls
and trolls. It challenges our assumptions about the nature of technology, creativity, and
value, reminding us that the digital world is shaped by powerful forces that determine what
is valued and what is not - and that a click is not always free.
Join Us! On Spring Solstice 2023 For "R.U. Cyber? :// Mondo 2000 History Project Salon"
at MozFest Virtual Plaza & Mozilla Hubs: AI-Musement Park 20th March / 8.30pm EU / GMT
- https://schedule.mozillafestival.org/plaza #Mozfest
- https://schedule.mozillafestival.org/session/LSNGRJ-1 #RUCyber
- https://www.mondo2000.com/2023/03/02/simcerity-r-u-against-nfts-period-question-mark-or-exclamation-point-join-mondo-vanilli-r-u-sirius-in-vr-on-march-20 #RUAgainstNFTs
R.U. Sirius is an American writer, editor, and media pioneer, known as one of the key figures
of the psychedelic and cyberpunk movements. He is best known as Mondo 2000's editor-in-chief
and for being at the forefront of the 1990s underground cyberculture movement.
Since 2010, MozFest has fueled the movement to ensure the internet benefits humanity, rather
than harms it. This year, your part in the story is critical to our community's mission: a better,
healthier internet and more Trustworthy AI.
Co-founded by PsychFi, FreekMinds & Squire Studios, we're a next generation multipotentiality
multi-award-winning, multi-dimensional motion arts experience design laboratory, developing
DIY changemaking createch immersive experiences & software applications for social good
storycraft. Supporters & Friends: Mozilla Festival, Jisc: Digifest, Beyond Games, Tate Modern,
Furtherfield, Boomtown Festival, Sci-Fi-London, Ravensbourne University London, UAL, East
London Dance, NESTA, Modern Panic, ArtFutura, Kimatica, National Gallery X, Kings College
London, Looking Glass Factory, SubPac, Ecologi, The JUMP, BOM Labs, Mondo 2000
PR Contact: James E. Marks, Tel: 07921 523438 @: jem@playla.bz Twitter: @GoGenieMo
image by Chad Essley Simcerity: I’m Against NFTs (It doesn’t matter much to me) 1: WE ARE DUCHAMPIAN OF THE WORLD When Marcel Duchamp — possibly at the suggestion of Baroness Elsa von Freytag-Loringhoven — dropped that urinal on an art gallery back in 1917, he signaled the world of art, contemporariness, galleries and capital that there was a new jest in town. image by Jay Cornell Value was to be...
Predictably, I think, this is exactly the form a lot of the criticisms of the piece have taken. Lots of the contra comments on Facebook and Twitter have adopted exactly that "tone ... as if they were a schoolteacher marking a child's work, or a psychiatrist assessing a patient" MF identifies.
People have tended to refer to the Vampires' Castle piece using words like "crude" or "incoherent". Then there's this slightly noxious piece (which, with its link to a photoshopped caricature, verges on character assassination). The starting point of the critique here is that "the reasons given [by MF in the Vampires' Castle article] ... do not lead to the conclusions he offers", that "it does not follow its own stated reasons". In other words, the teacher steps in to reprimand the pupil who hasn't polished his argument just-so.
FFS, the article is actually called "B-grade politics"!
Here again: "... a case of someone who's read a bit of philosophy and theory but simply doesn't understand the subtlety of the claims advanced therein."
And here: "What I would recommend Fisher is to do some reading". [sic]
Stepping outside of the internecine left for a moment, right-wing blogger Harry Mount made a very similar move earlier this month when he tried to discredit the "spoilt and childish" Russell Brand. Apparently, Brand's big problem in his journalistic writing is his overuse of "long, Latinate words that desperately scream 'I'm clever' at the reader". So according to Mount, Brand should "grow up" and "get a little more Anglo-Saxon" in his writing. These are highly contentious issues of style, about which there has been much debate for aeons. But Mount offers his maxim (Anglo-Saxon words=good, Latinate prose=bad) with the absolute authority of the High Tory schoolmaster.
Very different examples, but I think they're evidence of exactly the sort of imperious "reprimanding" tendency the Vampires' Castle conceit is trying to expose and challenge.
Very honored to have been asked to contribute to this great volume, to be published, inshallah, in October. Look for it! My chapter is entitled: "Rai, World Music, and Islam." More details here.

New video for Gimme Helter by Satori D, 2023. Music: MONDO Vanilli, from IOU Babe, 1994 (Scrappi DuChamp – Jonathan Burnside). Comments regarding co-creating and producing Gimme Helter for MONDO Vanilli, and about Trent Reznor, whose erstwhile record label Nothing had (sort of) signed MONDO Vanilli and paid for the studio time to produce an album. By Jonathan Burnside, as told to R.U. Sirius. First…

As you all know by now, we're causing the world to heat up. We've not got to a stage where we've stabilised the global temperature. In fact, we seem to be nowhere near stabilisation. Yet strangely there are still a few global warming deniers floating about. These deniers are arguing once again that an ice age is coming, or it's started cooling, or global warming has stopped, or "CO2 warming is a hoax" or some such nonsense. It's prompted me to write another article about how global temperature has been changing.
According to GISS NASA, the average global surface temperature anomaly for 2022 was 0.89 °C. This is 0.13°C below the hottest year so far. The hottest was 2020 at 1.02°C above the 1951 to 1980 average.
Below is a chart of the average of 12 months to December each year.
Figure 1 | Global mean surface temperature anomaly for the 12 months to December each year. The base period is 1951-1980. Data source: GISS NASA
As you may know, each of the five decades since 1972-1981 has been hotter than the previous one. The chart below shows how the world has been heating up since early last century. The chart includes a dotted line showing the mean annual temperature for the 20th century. Let's see what next year brings, especially if there is another El Nino.

Figure 2 | Global mean surface temperature anomaly by decade. The base period is 1951-1980. Data source: GISS NASA
I've also included a chart grouped by averages over 20 year periods showing how the rate of warming increased quite a bit over the latest 20 year period.

Figure 3 | Global mean surface temperature anomaly by 20 year periods. The base period is 1951-1980. Data source: GISS NASA
It's the increase in greenhouse gas that's causing warming!
Climate disinformers still exist, would you believe. Even after all the weather disasters of the past twenty years or so, there are still people who spread lies for various reasons.
Peter Sinclair has talked about this on Climate Denial Crock of the Week. Apparently one of the "paid to disinform" brigade, Steve Milloy, is running about shouting "CO2 warming is a hoax". It's no longer hard to believe that even Elon Musk has been spreading denial on Twitter, which he seems to have bought (for $44bn, mind you) mainly to promote the spread of harmful lies and neo-fascist propaganda. (It's changed from being social media to becoming his personal anti-social blog.)
Anyway, about that junk science claim. To support his latest greenhouse gas denial, Milloy claims there has been cooling for the last eight years, since 2015. He wrote: "Per NOAA data, we have emitted ~450 billion tons of CO2 since 2015 (14% of total manmade atmospheric CO2) yet the planet has cooled."
The claim of cooling from 2015 is obviously nonsensical cherry-picking, as plenty of other people have pointed out. The average temperature for the past 8 years, from 2015 to 2022 inclusive, is 0.93°C above the 1951 to 1980 mean. The last 8 years have seen the biggest single increase of all, a whopping 0.27°C. Before that, the biggest 8-year increase was 0.17°C, in the period 1999 to 2006 inclusive.
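You can check the block averaging yourself. The sketch below uses approximate annual anomalies rounded from the published GISS NASA series (the exact figures may differ by a hundredth of a degree or so); the point is that averaging the whole 2015-2022 block, rather than comparing Milloy's hand-picked endpoints, shows the period sitting well above the 1951-1980 baseline.

```python
# Approximate GISTEMP annual global mean anomalies (degrees C, relative to
# the 1951-1980 base period), 2015-2022. Illustrative values only.
anomalies = {2015: 0.90, 2016: 1.02, 2017: 0.92, 2018: 0.85,
             2019: 0.98, 2020: 1.02, 2021: 0.85, 2022: 0.89}

def block_mean(series, start_year, length=8):
    """Mean anomaly over a block of `length` consecutive years."""
    return sum(series[y] for y in range(start_year, start_year + length)) / length

# Averaging the whole block washes out the hot-start/cool-finish cherry-pick
# of choosing an El Nino year as the start and a La Nina year as the finish.
print(round(block_mean(anomalies, 2015), 2))  # 0.93
```

Comparing endpoints (2015 vs 2022) suggests "cooling"; the block mean shows the same years averaging roughly 0.93°C above the baseline, consistent with the figure above.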
Here's a chart showing the rise in global temperatures to 2022, in eight year blocks.

Figure 4 | Global mean surface temperature anomaly in 8 year blocks. The base period is 1951-1980. Data source: GISS NASA
That's not all. Milloy, who tweets as @junkscience (which is what it is), wimped out on a prediction. He obviously couldn't bring himself to make an ice age cometh claim. That would be too silly, even for him. Instead he predicted that if there was an El Nino this year the global temperature would rise. Duh! He added that when it finished the temperature would "steady/slightly decline (again) as El Nino ebbs". That's not as certain, but the odds are that will happen, too.
However! Notice that Milloy doesn't mention the last three years of La Nina, which has a cooling effect on temperature - if all else is equal, which it isn't!
To show his cherry-pick more clearly: the first chart shows the period Milloy is using for his false claim that greenhouse gases don't keep the earth warm. As you can see, Milloy used the old, tired denier ploy of starting with two years of El Nino (hot years) and finishing in the third year of La Nina (slightly less hot years).

Figure 5 | Mean surface temperature anomalies from 2015 to 2022 from the 1951-1980 mean. Blue columns are La Nina years and orange columns are El Nino years. Data sources: GISS NASA and Bureau of Meteorology, Australia.
The second chart shows it in the context of 50 years of warming.

Figure 6 | Mean surface temperature anomalies from 1973 to 2022 from the 1951-1980 mean. Blue columns are La Nina years and orange columns are El Nino years. Data sources: GISS NASA and Bureau of Meteorology, Australia.
Now what the sharp-eyed, and even the less eagle-eyed, among you may notice is that the El Nino years are getting hotter and the La Nina years are also getting hotter. It's not ENSO that's causing global warming. It's the greenhouse gas we're putting into the atmosphere, a large portion of which comes from the fossil fuels we're burning.
Cold blasts of hot air from the past
Here's a reminder of denier desperation and how disinformers have been feeding the deluded with false claims of "it hasn't warmed since..." for, well, certainly as long as I've been blogging. In June 2013 I pointed out the absurdity when Anthony Watts' denier blog, WattsUpWithThat, claimed it hadn't warmed since 1980, which was ridiculously wrong at the time - and it's warmed a lot more since then. Moving from the absurd to the more absurd, in October 2017 denier Anthony Watts crowed "it hasn't warmed since 2016". (2016 was a very hot El Nino year, yet 2020 was even hotter and there wasn't an El Nino that year.)
Where was it hot in 2022?
Last year was again hot almost everywhere. On the maps below, as the legend shows, blue is for below the 1951-1980 average, and yellow/orange/red is above. Compare the 2022 map with the one for 2020, which was the hottest year on record (so far). The cooler Pacific region of La Nina is clearly visible on the 2022 map.


Figure 7 | Maps showing mean surface temperature anomalies for 2022 and 2020 from the 1951-1980 mean. Data source: GISS NASA