Web: https://lauren.vortex.com/
XML: https://lauren.vortex.com/feed
Lauren Weinstein's Blog [ 19-Mar-19 6:21pm ]

Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto. 

While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who has bizarrely recommended everyone read the latter, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.

Let’s be very clear about this. It can be argued that the very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which the purveyors of hate speech and lying propaganda were leveraging their platforms. But they have of late taken major steps to deal with these problems, especially in the wake of breaking news like the NZ shooting, including taking various specific actions regarding takedowns, video suggestions, and other related issues, as recommended publicly by various observers, including myself.

Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.

In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be re-surfaced and altered in myriad ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human assets will continually improve these detection systems, absolute perfection is not likely in the cards for the foreseeable future, or more likely ever.
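To see why, consider a toy "average hash" fingerprint. This is purely an illustrative sketch of the general idea, not any platform's actual matching system (real systems use far more robust perceptual hashing and ML): an exact re-upload matches trivially, but even a simple alteration like mirroring the frame changes the fingerprint enough that naive matching misses it.

```python
# Toy fingerprint matching, illustrating why altered re-uploads can
# evade simple automated scanning. Illustrative only -- not any real
# platform's content-matching system.

def average_hash(frame):
    """frame: 2D list of grayscale pixel values (0-255).
    Returns a bit string: '1' where a pixel >= the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p >= mean else "0" for p in pixels)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 200], [220, 30]]

# Exact re-upload: identical hash, trivially caught.
assert hamming(average_hash(original), average_hash(original)) == 0

# Mirrored copy: same pixels, different layout -- the hash changes,
# so a system keyed on exact fingerprints would miss this re-upload.
mirrored = [list(reversed(row)) for row in original]
print(hamming(average_hash(original), average_hash(mirrored)))  # → 4
```

Every trivial transformation (mirroring, cropping, re-encoding, overlays) multiplies the variants a scanner must recognize, which is why detection keeps improving but never reaches 100%.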

Meanwhile, other demands being bandied about are equally specious.

Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.

Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are equally impractical. Even with unlimited resources you couldn’t hire enough people to completely preview the enormous numbers of videos being uploaded every minute. Full previews would be required in any case, since a prohibited clip could be spliced into otherwise permitted footage, and even then there would still be misidentifications.

Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined evildoers from “playing nice” long enough for restrictions to be lifted, and then orchestrating their attacks.

Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.

But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem. 

Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.

In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).

The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?

The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites. 

They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.

You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.

The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.

The battle lines are drawn. 

–Lauren–

UPDATE (February 28, 2019): More updates on our actions related to the safety of minors on YouTube

 – – –

For vast numbers of persons around the globe, YouTube represents one of the three foundational “must have” aspects of a core Google services triad, with the other two being Google Search and Gmail. There are many other Google services of course, but these three are central to most of our lives, and I’d bet that for many users of these services the loss of YouTube would be felt even more deeply than the loss of either or both of the other two!

The assertion that a video service would mean so much to so many people might seem odd in some respects, but on reflection it’s notable that YouTube very much represents the Internet — and our lives — in a kind of microcosm.

YouTube is search, it’s entertainment, it’s education. YouTube is emotion, nostalgia, and music. YouTube is news, and community, and … well the list is almost literally endless.

And the operations of YouTube encompass a long list of complicated and controversial issues also affecting the rest of the Internet — decisions regarding content, copyright, fair use, monetization and ads, access and appeals, and … yet another very long list.

YouTube’s scope in terms of numbers of videos and amounts of Internet traffic is vast beyond the imagination of any mere mortal, with the exception of Googlers like the YouTube SREs themselves, who keep the wheels spinning for the entire massive mechanism.

In the process of growing from a single short video about elephants at the zoo (more about that 2005 video in a moment) into a service that I personally can’t imagine living without, YouTube has increasingly intersected with the entire array of human social issues, from the most beatific, wondrous, and sublime — to the most crass, horrific, and evil.

I’ve discussed all of these aspects of YouTube — and my both positive and negative critiques regarding how Google has dealt with them over time — in numerous past posts over the years. I won’t even bother listing them here — they’re easy to find via search.

I will note again, though, that Google has, especially of late, become very serious about dealing with inappropriate content on YouTube, including taking some steps that I and others have long been calling for, such as removing dangerous “prank and dare” videos, demonetizing and generally de-recommending false “conspiracy” videos, and, just announced, demonetizing and taking other utterly appropriate actions against dangerous “anti-vaccine” (aka “anti-vaxx”) videos.

This must be an even more intense time than usual for the YouTube policy folks up in San Bruno at YouTube HQ, because over the last few days yet another massive controversy regarding YouTube has erupted. This one has been bubbling under the surface for a long time, and it suddenly burst forth dramatically, and rather confusingly as well, involving the “hijacking” by pedophiles of the comments on innocent YouTube videos.

YouTube comments are a fascinating example of often stark contrasts in action. Many YouTube viewers just watch the videos and ignore comments completely. Other viewers consider the comments to be at least as important as the videos themselves. And many YouTube uploaders (I’ll refer to them as creators going forward in this post) are effectively oblivious to comments even on their own videos, which, given that the default setting for YouTube videos is to permit comments without any moderation, has become an increasingly problematic issue.

My own policy (started as soon as the functionality to do so became available) has always been to set my own YouTube videos to “moderated” mode — I must approve individual comments before they can appear publicly. But that takes considerable work, even with relatively low viewership videos like mine. Most YouTube creators likely never change the default comments setting, so comments of all sorts can appear and accumulate largely unnoticed by most creators.
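The difference between the two settings can be sketched as a toy model (this is just an illustration of the workflow, not YouTube's actual data model or API): in the default mode, anything posted is immediately public; in "moderated" mode, every comment is held until the creator approves it.

```python
# Toy model of per-video comment moderation. In "moderated" mode,
# comments are held for review until the creator approves them.
# Illustrative only -- not YouTube's actual implementation.

class Video:
    def __init__(self, moderated=False):  # default mirrors YouTube's: unmoderated
        self.moderated = moderated
        self.public = []   # comments visible to viewers
        self.held = []     # comments awaiting creator review

    def add_comment(self, text):
        (self.held if self.moderated else self.public).append(text)

    def approve(self, text):
        if text in self.held:
            self.held.remove(text)
            self.public.append(text)

default_video = Video()  # defaults left untouched
default_video.add_comment("sub4sub spam")
print(default_video.public)   # immediately public, no review

moderated_video = Video(moderated=True)
moderated_video.add_comment("great how-to, thanks!")
print(moderated_video.public)  # empty until the creator approves
moderated_video.approve("great how-to, thanks!")
print(moderated_video.public)  # now visible
```

The catch is visible in the model itself: every `approve` call is manual work by the creator, which is exactly why most creators never leave the default, and why spam accumulates unnoticed.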

In fact, a few minutes ago when I took another look at that first YouTube video (“Me at the zoo”) to make sure that I had the date correct, I noticed that it now has (as I type this) about 1.64 million comments. Every 5 or 10 seconds a new comment pops up on there, virtually all of them either requests for viewers to subscribe to other YouTube channels, or various kinds of more traditional spams and scams.

Obviously, nobody is curating the comments on this historic video. And this is the same kind of situation that has led to the new controversy about pedophiles establishing a virtual “comments network” of innocent videos involving children. It’s safe to assume that the creators of those videos haven’t been paying attention to the evil comments accumulating on those videos, or might not even know how to remove or otherwise control them.

There have already been a bunch of rather wild claims made about this situation. Some have argued that YouTube’s suggestion engine is at fault for suggesting more similar videos that have then in turn had their own comments subverted. I disagree. The suggestion algorithm is merely recommending more innocent videos of the same type. These videos are not themselves at fault, the commenters are the problem. In fact, if YouTube videos didn’t have comments at all, evil persons could simply create comments on other (non-Google) sites that provided links to specific YouTube videos. 

It’s easy for some to suggest simply banning or massively restricting the use of comments on YouTube videos as a “quick fix” for this dilemma. But that would drastically curtail the usefulness of many righteous videos.

I’ve seen YouTube entertainment videos with fascinating comment threads from persons who worked on historic movies and television programs or were related to such persons. For “how-to” videos on YouTube — one of the most important and valuable categories of videos as far as I’m concerned — the comment threads often add enormous value to the videos themselves, as viewers interact about the videos and describe their own related ideas and experiences. The same can be said for many other categories of YouTube videos as well — comments can be part and parcel of what makes YouTube wonderful.

To deal with the current, highly publicized crisis involving comment abuse — which has seen some major advertisers pulling their ads from YouTube as a result — Google has been disabling comments on large numbers of videos, and is warning that if comments are turned back on by these video creators and comment abuse occurs again, demonetization and perhaps other actions against those videos may occur.

The result is an enormously complex situation, given that in this context we are talking almost entirely about innocent videos where the creators are themselves the victims of comment abuse, not the perpetrators of abuse.

While I’d anticipate that Google is working on methods to algorithmically filter comments better at scale, to try to help avoid these comment abuses going forward, this still likely creates a situation where comment abuse could in many cases be “weaponized” to target innocent individual YouTube creators and videos, to try to trigger YouTube enforcement actions against those innocent parties.

This could easily create a terrible dilemma. For safety’s sake, these innocent creators may be forced to disable comments completely, eliminating much of the value of their videos to their viewers. On the other hand, many creators of high-viewership videos simply don’t have the time or other resources to individually moderate every comment before it appears.

A significant restructuring of the YouTube comments ecosystem may be in order, to permit the valuable aspects of comments to continue on legitimate videos, while still reducing the probabilities of comment abuse as much as possible. 

Perhaps it might be necessary to consider permanently changing the default comments setting away from “allowed” to either “not allowed” or “moderated” for new uploads (at least for certain categories of videos), especially for new YouTube creators. But given that so many creators never change the defaults, the ultimate ramifications and possible unintended negative consequences of such a significant policy change are difficult to predict.

Improved tools to aid creators in moderating comments on high-viewership videos would also seem to be in order, perhaps leveraging third-party services or trusted viewer communities.

There are a variety of other possible approaches as well.

It appears certain that both YouTube itself and YouTube creators have reached a critical crossroads, a junction that will likely require some significant changes to navigate successfully, if the greatness of YouTube and its vast positive possibilities for creators are to be maintained and grow.

–Lauren–

A few weeks ago, I noted the very welcome news that Google’s YouTube is cracking down on the presence of dangerous prank and dare videos, rightly categorizing them as potentially harmful content no longer permitted on the platform. Excellent.

Even more recently, YouTube announced a new policy regarding the category of misleading and clearly false “conspiracy theory” videos that would sometimes appear as suggested videos.

Quite a few folks have asked me how I feel about this newer policy, which aims to prevent this category of videos from being suggested by YouTube’s algorithms, unless a viewer is already subscribed to the YouTube channels that uploaded the videos in question.

The policy will take time to implement given the significant number of videos involved and the complexities of classification, but I feel that overall this new policy regarding these videos is an excellent compromise.

If you’re a subscriber to a conspiracy video hosting channel, conspiracy videos from that channel would still be suggested to you.

Otherwise, if you don’t subscribe to such channels, you could still find these kinds of videos if you purposely search for them — they’re not being removed from YouTube.

A balanced approach to a difficult problem. Great work!

–Lauren–

It’s getting increasingly difficult to keep up with Google’s User Trust Failures these days, as they continue to rapidly shed “inconvenient” users faster than a long-haired dog. I do plan a “YouTube Live Chat” to discuss these issues and other Google-related topics, tentatively scheduled for Tuesday, February 12 at 10:30 AM PST. The easiest way to get notifications about this would probably be to subscribe to my main YouTube channel at: https://www.youtube.com/vortextech (be sure to click on the “bell” after subscribing if you want real time notifications). I rarely promote the channel but it’s been around for ages. Don’t expect anything fancy.

In the meantime, let’s look at Google’s latest abominable treatment of users, and this time it’s users who have actually been paying them with real money!

As you probably know, I’ve recently been discussing Google’s massive failures involving the shutdown of Google+ (“Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening” – https://lauren.vortex.com/2019/02/04/google-users-panic-over-google-deletion-emails-heres-whats-actually-happening).

Google has been mistreating loyal Google users — among the most loyal that they have and who often are decision makers about Google commercial products — in the process of the G+ shutdown on very short notice.

One might think that Google wouldn’t treat their paying customers as badly — but hey, you’d be wrong.

Remember when Google Fiber was a “thing” — when cities actually competed to be on the Google Fiber deployment list? It’s well known that incumbent ISPs fought against Google on this tooth and nail, but there was always a suspicion that Google wasn’t really in this for the long haul, and that it was really more of an experiment and an effort to try to jump-start other firms into deploying fiber-based Internet and TV systems.

Given that the project has been downsizing for some time now, Google’s announcement today that they’re pulling the plug on the Louisville Google Fiber system doesn’t come as a complete surprise.

But what’s so awful about their announcement is the timing, which shows Google’s utter contempt for their Louisville fiber subscribers, on a system that only got going around two years ago.

Just a relatively short time ago, in August 2018, Google was pledging to spend the next two years dealing with the fiber installation mess that was occurring in their Louisville deployment areas (“Google Fiber announces plan to fix exposed fiber lines in the Highlands” – https://www.wdrb.com/news/google-fiber-announces-plan-to-fix-exposed-fiber-lines-in/article_fbc678c3-66ef-5d5b-860c-2156bc2f0f0c.html).

But now that’s all off. Google is giving their Louisville subscribers notice that they have only just over two months before their service ends. Go find ye another ISP in a hurry, oh suckers who trusted us!

Google will provide those two remaining months’ service for free, but that’s hardly much consolation for their subscribers who now have to go through all the hassles of setting up alternate services with incumbent carriers who are laughing their way to the bank.

Imagine if one of those incumbent ISPs like a major telephone or cable company tried a shutdown stunt like this with notice of only a couple of months? They’d be rightly raked over the coals by regulators and politicians.

Google claims that this abrupt shutdown of the Louisville system will have no impact on other cities where Google Fiber is in operation. Perhaps so — for now. But as soon as Google finds those other cities “inconvenient” to serve any longer, Google will most likely trot out the guillotines to subscribers in those cities in a similar manner. C’mon, after treating Louisville this way, why should Fiber subscribers in other cities trust Google when it comes to their own Google-provided services?

Ever more frequently now, this seems to be The New Google’s game plan. Treat users — even paying users — like guinea pigs. If they become inconvenient to care for, give them a couple of months notice and then unceremoniously flush them down the toilet. Thank you for choosing Google!

Google is day by day becoming unrecognizable to those of us who have long felt it to be a great company that cared about more than just the bottom line.

Googlers — the rank and file Google employees and ex-employees whom I know — are still great. Unfortunately, as I noted in “Google's Brain Drain Should Alarm Us All” (https://lauren.vortex.com/2019/01/12/googles-brain-drain-should-alarm-us-all), some of their best people are leaving or have recently left, and it becomes ever more apparent that Google’s focus is changing in ways that are bad for consumer users and causing business users to question whether they can depend on Google to be a reliable partner going forward (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google).

In the process of all this, Google is making itself ever more vulnerable to lying Google Haters — and to pandering politicians and governments — who hope to break up the firm and/or suck in an endless money stream of billions in fines from Google to prop up failing 20th century business models.

The fact that Google for the moment is still making money hand over fist may be partially blinding their upper management to the looming brick wall of government actions that could potentially stop Google dead in its tracks — to the detriment of pretty much everyone except the politicos themselves.

I remain a believer that suggested new Google internal roles such as ombudspersons, user advocates, ethics officers, and similar positions — all of which Google continues to fight against creating — could go a long way toward bringing balance back to the Google equation that is currently skewing ever more rapidly toward the dark side.

I continue — perhaps a bit foolishly — to believe that this is still possible. But I am decreasingly optimistic that it shall come to pass.

–Lauren–

Two days ago I posted “Google's Google+ Shutdown Emails Are Causing Mass Confusion” (https://lauren.vortex.com/2019/02/02/googles-google-shutdown-emails-are-causing-mass-confusion) — and the reactions I’m receiving make it very clear that the level of confusion and panic over this situation by vast numbers of Google users is even worse than I originally realized. My inbox is full of emails from worried users asking for help and clarifications that they can’t find or get from Google (surprise!) — and my Google+ (G+) threads on the topic are similarly overloaded with desperate comments. People are telling me that their friends and relatives have called them, asking what this all means.

Beyond the trust-abusing manner in which Google has been conducting the entire consumer Google+ shutdown process (even their basic “takeout” tool to download your own posts is reported to be unreliable for G+ downloads at this point), their notification emails, which I had long urged be sent to provide clarity to users, were instead worded in ways that have massively confused many users, enormous numbers of whom don’t even know what Google+ actually is. These users typically don’t understand the ways in which G+ is linked to other Google services. They understandably fear that their other Google services may be negatively affected by this mess.

Since Google isn’t offering meaningful clarification for panicked users — presumably taking its usual “this too shall pass” approach to user support problems — I’ll clarify this all as succinctly as I can — to the best of my knowledge — right here in this post.

UPDATE (February 5, 2019): Google has just announced that the Web notification panel primarily used to display G+ notifications will be terminated this coming March 7. This cuts another month off the useful life of G+, right when we’ll need notifications the most to coordinate with our followers for continuing contacts after G+. Without the notification panel, this will be vastly more difficult, since the alternative notifications page is very difficult to manage. No apologies. No nuthin’. First it was August. Then April. Now March. Can Google mistreat consumer users any worse? You can count on it!

Here’s an important bottom line: Core Google Services that you depend upon such as Gmail, Drive, Photos, YouTube, etc. will not be fundamentally affected by the G+ shutdown, but in some cases visible effects may occur due to the tight linkages that Google created between G+ and other services.

No, your data on Gmail or Drive won’t be deleted by the Google+ shutdown process. Your uploaded YouTube videos won’t be deleted by this.

However, outside of the total loss of trust among loyal Google+ users, triggered by the kick in the teeth of the Google+ shutdown (without even the provision of a tool to help with follower migration – “If Google Cared: The Tool That Could Save Google+ Relationships”: https://lauren.vortex.com/2019/02/01/if-google-cared-the-tool-that-could-save-google-relationships), a variety of other Google services will have various aspects “break” as a result of Google’s actions related to Google+.

To understand why, it’s important to understand that when Google+ was launched in 2011, it was positioned more as an “identity” product than a social media product per se. While it might have potentially competed with Facebook in some respects, creating a platform for “federated” identity across a wide variety of applications and sites was an important goal, and in the early days of Google+, battles ensued over such issues as whether users would continue to be required to use their ostensibly “real” names for G+ (aka, the “nymwars”).

Google acted to integrate this identity product — that is, Google+ — into many Google services and heavily promoted the use of G+ “profiles” and widgets (comments, +1 buttons, “follow” buttons, login functions, etc.) for third-party sites as well.

In some cases Google required the creation of G+ profiles for key functions on other services, such as for creating comments on YouTube videos (a requirement that was later dropped, as user reactions in both the G+ and YouTube communities were overwhelmingly negative).

Now that consumer G+ has become an “inconvenience” to Google, they’re ripping it out by the roots and attempting to completely eliminate any evidence of its existence, by totally removing all G+ posts, comments, and the array of G+ functions that they had intertwined with other services and third-party sites.

This means that anywhere G+ comments have continued to be present (including on Google services like “Blogger”), those comments will vanish. Users whom Google had encouraged to use G+ profile identities at other sites and services (rather than their underlying Google Account identities) will find that those capabilities and profiles disappear. Sites that embedded G+ widgets and functions will have those capabilities crushed, and their page formats in many cases disrupted as a result. Photos that were stored only in G+ and not backed up into the mainstream Google Photos product will reportedly be deleted along with all the G+ posts and comments.

And then on top of all this other Google-created mayhem related to their mishandling of the G+ shutdown, we have those panic-inducing emails going out to enormous numbers of Google users, most of whom don’t understand them. They can’t get Google to explain what the hell is going on, especially in a way that makes sense if you don’t understand what G+ was in the first place, even if somewhere along the line Google finessed you into creating a G+ account that you never actually used.

There’s an old saying — many of you may have first heard it stated by “Scotty” in an old original “Star Trek” episode: “Fool me once, shame on you — fool me twice, shame on me!”

In a nutshell, this explains why so many loyal users of great Google services — services that we depend on every day — are so upset by how Google has handled the fiasco of terminating consumer Google+. This applies whether or not these users were everyday, enthusiastic participants in G+ itself (as I’ve been since the first day of beta availability) — or even if they don’t have a clue of what Google+ is — or was.

Even given the upper management decision to kill off consumer Google+, the actual process of doing so could have been handled so much better, had there been genuine concern about all of the affected users. Frankly, it’s difficult to imagine realistic scenarios in which Google could have bungled this situation any worse.

And that’s very depressing, to say the least.

–Lauren–

UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening

– – –

As I have long been urging, Google is finally sending out emails to Google+ account holders warning them of the impending user trust failure that is the Google+ shutdown. However — surprise! — the atrocious way that Google has worded the message is triggering mass confusion from users who don’t even consider themselves to have ever been G+ users, and are now concerned that other Google services such as Photos, Gmail, YouTube, etc. may be shutting down and associated data deleted (“Google Finally Speaks About the G+ Shutdown: Pretty Much Tells Users to Go to Hell” – https://lauren.vortex.com/2019/01/30/google-finally-speaks-about-the-g-shutdown-pretty-much-tells-users-to-go-to-hell).

The underlying problem is that many users have G+ accounts but don’t realize it, and apparently Google is sending essentially the same message to everyone who ever had a G+ account, active or not. Because Google has been aggressively urging the creation of G+ accounts (literally until a few days ago!) many users inadvertently or casually created them, and then forgot about them, sometimes years ago. Now they’re receiving confusing “shutdown” messages and are understandably going into a panic.

UPDATE (February 3, 2019): I’m now receiving reports of users (especially ones receiving the notification emails who don’t recall having G+ accounts) fearing that “all their Google data is going to be deleted” — and also reports of many users who are assuming that these alarming emails about data deletion are fakes, spam, phishing attempts, etc. I’m also receiving piles of messages containing angry variations on “What the hell was Google thinking when they wrote those emails?”

During the horrific period some years ago when Google was REQUIRING the creation of G+ accounts to comment on YouTube (a disaster that I railed against both outside and inside the company at the time) vast numbers of comments and accounts became tightly intertwined between YouTube and G+, and the ultimate removal of that linkage requirement left enormous numbers of G+ accounts that had really only been created by users for YouTube commenting during that period.

So this new flood of confused and concerned users was completely predictable. If I had written the Google+ shutdown emails, I would have clearly covered these issues to help avoid upsetting Google users unnecessarily. But of course Google didn’t ask me to write the emails, so they followed their usual utilitarian approach toward users that they’re in the process of shedding — yet another user trust failure.

But this particular failure was completely preventable.

Be seeing you.

–Lauren–

UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening

UPDATE (February 2, 2019): Google's Google+ Shutdown Emails Are Causing Mass Confusion

– – –

One of the questions I’m being frequently asked these days is specifically what could Google have done differently about their liquidation of Google+, given that a decision to do so was irrevocable. Much of this I’ve discussed in previous posts, including those linked within: “Google Finally Speaks About the G+ Shutdown: Pretty Much Tells Users to Go to Hell” (https://lauren.vortex.com/2019/01/30/google-finally-speaks-about-the-g-shutdown-pretty-much-tells-users-to-go-to-hell).

The G+ shutdown process is replete with ironies. The official Google account on G+ is telling users to follow Google on competitors like Facebook, Twitter, and Instagram. While there are finally some butter bar banners up warning of the shutdown (as I’ve long been calling for), warning emails apparently haven’t yet gone out to most ordinary active G+ users. Meanwhile, some users who had previously deleted their G+ accounts or G+ pages are reportedly receiving emails informing them that Google is no longer honoring their earlier promise to preserve photos uploaded to G+: download them now or they’ll be crushed like bugs.

UPDATE (February 1, 2019): Emails with the same basic text as the G+ help page announcement from January 30 regarding the shutdown (referenced at the “Go to Hell” link mentioned above) are FINALLY beginning to go out to current G+ account holders (and apparently to some people who don’t even recall ever using G+).

Google is also recommending that you build blogs or use other social media to keep in touch with your G+ followers and friends after G+ shuts down, but has provided no mechanism to help users to do so. And this is a major factor in Google’s user trust failure when it comes to their handling of this entire situation.

G+ makes it intrinsically difficult to reach out to your followers to get contact information for moving forward. You never know which of your regular posts will actually be seen by any given following user, and even trying to send private “+name” messages within G+ often fails, because G+ sorts similar profile names in inscrutable ways and presents them in limited-length lists, often preventing you from ever pulling up the user you really want to contact. This gets especially bad when you have a lot of followers; believe me, I’ve battled this many times trying to send a message to an individual follower, often giving up in despair.

I would assert — and I’m not wholly ignorant of how G+ works — that it would be relatively straightforward to offer users a tool that could ask their followers (by follower circles, en masse, etc.) if they wished to stay in contact, and to provide interested followers the means to pass back to the original user a URL for a profile on a different social media platform, an email address, or hell, even a phone number. Since this would be entirely voluntary, there would be no significant data privacy concerns.
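To make the shape of such a voluntary exchange concrete, here is a minimal sketch of the data flow being proposed. Every class, field, and name below is invented for illustration; nothing like this exists in any Google or G+ API:

```python
# Hypothetical sketch of a voluntary "stay in touch" exchange for a
# shutting-down social platform. All names are invented for illustration;
# no such Google/G+ API exists.

from dataclasses import dataclass, field

@dataclass
class ContactOffer:
    """A follower's voluntary reply: where they can be reached next."""
    follower_id: str
    contact: str  # a profile URL, an email address, or even a phone number

@dataclass
class MigrationRequest:
    """A user asks some or all of their followers to stay in contact."""
    user_id: str
    audience: list                                 # follower ids asked
    replies: dict = field(default_factory=dict)    # follower_id -> contact

    def record_reply(self, offer: ContactOffer) -> None:
        # Only followers who were actually asked, and who explicitly opt
        # in, ever share anything -- so no new privacy exposure is created.
        if offer.follower_id in self.audience:
            self.replies[offer.follower_id] = offer.contact

# Example: a user asks two followers; one opts in.
req = MigrationRequest(user_id="lauren", audience=["alice", "bob"])
req.record_reply(ContactOffer("alice", "alice@example.com"))
print(req.replies)  # {'alice': 'alice@example.com'}
```

The key design point is that the platform merely relays opt-in replies; it never exposes anyone’s contact information without that follower’s explicit action.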

Such a tool could be enormously beneficial to current G+ users, providing them a simple means to stay in touch after G+’s demise in a couple of months. And if Google had announced such a tool (a clear demonstration of concern for their existing users, rather than trying to wipe them off Google’s servers as quickly as possible and with a minimum of effort), it would have gone far toward proactively avoiding the many user trust concerns that have been triggered and exacerbated by Google’s current game plan for eliminating Google+.

That such a migration assistance tool doesn’t exist — which would have done so much good for so many loyal G+ users, among Google’s most fervent advocates until now — unfortunately speaks volumes about how Google really feels about us.

–Lauren–

UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here's What's Actually Happening

UPDATE (February 2, 2019): Google's Google+ Shutdown Emails Are Causing Mass Confusion

UPDATE (February 1, 2019): If Google Cared: The Tool That Could Save Google+ Relationships

– – –

For weeks now, I’ve been pounding on Google to get more explicit about their impending shutdown of consumer Google+. What they’ve finally written today on a G+ help page (https://support.google.com/plus/answer/9195133) demonstrates clearly how little they care about G+ users who have spent years of their lives building up the service, puts the lie to key claimed excuses for ending consumer G+, and calls into question the degree to which any consumer or business users of Google should trust the firm’s dedication to any specific service going forward.

The originally announced shutdown date was August. Then suddenly it was advanced to April (we now know from the new help page that the official death date is 2 April 2019, though the process of completely deleting everyone from existence may take some months).

The key reasons for the shutdown originally stated by Google were API “security problems” that were obviously blown out of proportion — Google isn’t even mentioning those in their new announcements. Surprise, surprise:

“Given the challenges in creating and maintaining a successful Google+ that meets our consumer users' expectations, we decided to sunset the consumer version of Google+. We're committed to focusing on our enterprise efforts, and will be launching new features purpose-built for businesses.”

Translation: Hey, you’re not paying us anything, bug off!

And as I had anticipated, Google is doing NOTHING to help G+ users stay in touch with each other after the shutdown. In other words, it’s up to you to figure out some way to do it, boys and girls! Now go play on the freeway! Get lost! We just don’t care about you!

Since there’s nothing in Google’s new announcement that contradicts my analysis of this situation in my earlier related posts, I will herewith simply include for reference some of my recent posts related to this topic, for your possible perusal as you see fit.

I’ll note first my post announcing my own private forum that I’ve been forced to create, to try to provide a safe home for many of my G+ friends who are being unceremoniously crushed by Google’s betrayal of their trust. Given my very limited resources, creating a new forum at this time was not in my plans, but Google’s shabby treatment of G+ users forced my hand. No matter what else happens in my life, I promise never to treat users of my forum with the disrespect and contempt that Google has shown:

A New Invite-Only Forum for Victims of Google's Google+ Purge
https://lauren.vortex.com/2019/01/05/a-new-invite-only-forum-for-victims-of-googles-google-purge

And here are some of my related posts regarding the Google+ shutdown fiasco, its impacts on users, and related topics:

Google's G+ User Trust Betrayal Gets Worse and Worse
https://lauren.vortex.com/2019/01/29/googles-g-user-trust-betrayal-gets-worse-and-worse

An Important Message from "Google" about Google+
https://lauren.vortex.com/2019/01/22/an-important-message-from-google-about-google

Boot to the Head: When You Know that Google Just Doesn't Care Anymore
https://lauren.vortex.com/2019/01/14/boot-to-the-head-when-you-know-that-google-just-doesnt-care-anymore

Why Google Is Terrified of Its Users
https://lauren.vortex.com/2019/01/06/why-google-is-terrified-of-its-users

Why I No Longer Recommend Google for Many Serious Business Applications
https://lauren.vortex.com/2018/12/20/why-i-no-longer-recommend-google-for-many-serious-business-applications

Can We Trust Google?
https://lauren.vortex.com/2018/12/10/can-we-trust-google

The Death of Google
https://lauren.vortex.com/2018/10/08/the-death-of-google

As Google’s continuing decimation of user trust accelerates, you can count on me having more to say about these situations as we move forward. Take care everyone. Stay strong.

Be seeing you.

–Lauren–

When I recently posted a parody “Message from Google” regarding the upcoming shutdown of consumer Google+, I did not anticipate the wellspring of reactions from Google users, including those who were not specifically Google+ users.

An Important Message from "Google" about Google+ !
https://lauren.vortex.com/2019/01/22/an-important-message-from-google-about-google

(Google Docs Version: https://lauren.vortex.com/google-plus)

I had anticipated many folks saying that the posting was funny but in key respects depressingly true (which they did). But I did not expect my inbox to be flooded with consumer and business users telling me that they were abandoning Google services, or not moving operations to Google, due to Google’s shabby treatment of so many users. Nor did I realize that I was going to become the focal point for desperate, loyal G+ users asking me questions that Google has been refusing to answer.

In retrospect I shouldn’t have been surprised. To this day, Google has as far as I know not emailed ordinary G+ users about what’s going on, has no informational banners up about the impending shutdown, and (believe it or not!) is still soliciting for new users to join G+ and spend their time following other users and getting to know a service that Google is about to mercilessly destroy!

It’s remarkable. Unfathomable. Disgraceful.

And the questions. G+ users are sending me their questions:

What happens to all of the external web pages and posts that link to public G+ posts? Google taking down those G+ posts will break vast numbers of non-Google pages around the web.

What happens to sites that have deeply embedded G+ APIs for displaying “Plus” counts, follower boxes, G+ site login integrations, and more? What happens to Google Contacts data integrated from G+?

What is the ultimate fate of the actual G+ posts and related data? Do they all suddenly vanish from public view, from the control of their authors? Will they continue to be used internally by Google for ad system, machine learning, or for other purposes?

The list goes on and on.

Meanwhile, Google is hardly saying anything at all. It’s obvious that they’re treating consumer G+ — and all of its loyal users — as inconvenient pariahs, tossing us all into their dumpster as quickly and unceremoniously as possible.

My inbox is full of users both angry and sad, who loved Google but are now feeling like they’ve been pushed out of a car and directly into the path of steamrollers.

I’ve always tried to help with Google-related problems when I could. But I really don’t know what to say to these jilted users abandoned so callously by Google, because frankly I feel the same way about how Google is mistreating us, and Google has not been forthcoming with explanations, answers, or even believable excuses.

It’s obvious that Google just doesn’t care. And perhaps that’s the saddest part of all.

–Lauren–

UPDATE (March 16, 2019): The ads discussed below as appearing on the Roku YouTube app (even when subscribed to YouTube Premium) have now vanished for me — at least for the moment. I have no word as to whether this is a temporary or more long-term change, whether this was a test that has now terminated, or any other additional information. But I’m definitely glad to see those annoying boxes gone, especially the one that was overlaid on the playing videos themselves.

– – –

I pay for YouTube Premium because — among other things — I don’t want to see ads on videos.

But at least through the popular YouTube Roku app, YouTube is now continuously displaying BUY SEASON ads for some video program clips (complete with purchase price) in a blue box on the app’s “watch pages,” and even worse, for around 10 seconds as a corner ad box overlaid on the running videos themselves. The blue box ad is also present whenever you return to the watch page (e.g., by pausing the video), and the overlay ad reappears for the same interval every time you resume the video. The overlay ad in particular is extremely annoying.

These ads also appear as a box on the regular web-based YouTube watch pages for these clips, where they are less obtrusive but are still ads on an ostensibly ad-free service.

YouTube Premium is promoted as a paid, ad-free service. The presence of these ads on Premium accounts (especially when overlaid on top of running videos — whether limited to Roku devices or ultimately deployed through other display devices as well) is not acceptable.

–Lauren–

(Google Doc version: https://lauren.vortex.com/google-plus)

Google – “You can count on us!”

An important announcement about Google+

Dear Google+ users,

We have some bad news for you. We hope you’re sitting down. If you’re driving, please pull over safely before reading the remainder of this message.

We know that many of you have built major parts of your lives around Google+, beginning back in 2011. Over the years since, we have encouraged you to share your experiences and photos, to build Communities and Collections. We know that large numbers of you have spent hours every day on G+, and have built up networks of friends with whom you communicate every day on G+.

And we know that in our rush to maximize G+ participation and engagement, we made some pretty poor decisions, like that period where we integrated YouTube comments and G+ posts, requiring YouTube commenters to create G+ accounts — managing to upset both communities in the process. But you know the motto — move fast and break things!

Now we just want to get out from under Google+. And you’re going to be the collateral damage. Please understand that it’s nothing personal. It’s just business.

So we’re shutting down G+. We’ll be shutting it down this coming August, uh April, uh as soon as we can locate the Google+ SRE in charge. We’ve been trying to page them for months but they’re not answering. We’re pretty sure that there’s a G+ control dashboard in our systems somewhere — when we find it we’ll pull the switch and you’ll all be history.

We could yank your chains and claim that killing G+ is all about poor engagement and API problems and whatnot, but we know you’d see through that, and frankly we just don’t want you around anymore. You’re more trouble than you’re worth to a firm that is pivoting ever more toward serving businesses who actually pay us with actual money. Of course, many businesses now claim that they’ve lost faith in us due to our behavior killing services and mistreating users on the consumer side, but we’ll throw them some usage credits and they’ll come around. You can always buy user trust!

The ad business just isn’t what it used to be. We need new users in new places! Governments are breathing down our necks, ad blockers are reducing ad impressions and conversions, and a bunch of would-be do-gooders are making a fuss about our plans to set up a censored search engine in China. You know how many Chinese are in China? More than you can count on your fingers and toes, believe us!

And speaking of business, we’ll be continuing G+ over on our enterprise/business products, at least until it becomes inconvenient for us to keep doing so. And before you ask, no, you can’t pay for continued access to consumer G+ or bundle it with Google One, and you can’t have a pony or anything like that. Get this through your heads. You’re not our target users or target demographics. We just don’t care about you.

Now, after we’ve said all that, we hope that you won’t get too upset if we ask for your help in killing off G+ with a minimum of public attention from bloggers and the media.

Since we routinely provide the means for you to download your data from Google, you can download your G+ posts before we drive a stake through the heart of the G+ data center clusters. We don’t know what the hell you’re going to do with that data, since you’re going to lose contact with all your followers and friends you’ve built up over the years on G+, but did you really expect us to bother providing a tool to help you stay in contact with them after G+ is tossed into the dumpster? We recommend that you just forget about those people, like we’re forgetting about you. It’s easy with practice.

Oh, here’s another thing. You might expect that with the shutdown of G+ so close, we wouldn’t still be soliciting for new G+ users, and you might think that we’d have “butter bar” banners up warning users of the shutdown and providing continuing updates. You might expect us to email G+ users about what’s going on.

But, c’mon, you know us better than that. Remember, we just don’t care, so there are no banners, no continuing informational updates, and — get this! — we’re still soliciting for new G+ users to sign up, without so much as giving them a clue that they’re signing up for a service that is “dead man walking” already! The poor ignorant slobs! Pretty funny, huh? And the only users we’ve emailed about the G+ shutdown are at sites using our G+ APIs, which we’re going to start dismantling in late January. It’s going to be quite a show, because that’s going to break vast numbers of websites that made the mistake of deeply embedding G+ APIs into their systems. Hey, to quote “Otter” from “Animal House” — “You f*cked up! You trusted us!”

So it’s up to you all to spread the word about what’s going on, because we’ve got better things to do than dealing with G+ losers. You’re so yesterday!

OK, ’nuff said! We’ve already spent more time on this note than we should have, and talking to you guys isn’t advancing any of our careers. Be glad that we’re posting this in a nice dark font that you can actually read — we could have used “Material Design” and then sat here chuckling, knowing that so many of you would be squinting and getting migraine headaches from trying to read this.

But we’re not cruel. We just don’t care about you. There’s a big difference! Please keep that in mind.

Thanks for being the guinea pigs in our social media experiment that was Google+. Now back to your cages!

Best,

Google, Inc.

 – – –

Lauren Weinstein / lauren.vortex.com / 22 January 2019 / https://plus.google.com/+LaurenWeinstein / https://twitter.com/laurenweinstein

Google Contacts — which I use heavily — has now moved over to Google’s horrific “let’s kick people with less than perfect vision in the teeth!” user interface (UI) design. I assume it’s rolling out gradually so you may not have it yet.

But even when you do get it, you STILL may not be able to really see it, because like most of Google’s “material design” UI “refreshes” it’s terrible for anyone who has problems with low contrast fonts. Even at 175% magnification, the fonts are painful to read — and for many users are likely to be impossible to view in a practical manner. And as usual, older users will suffer most at the hands of Google’s UI design changes.

There are a few minor improvements in the new Contacts design relating to form field layouts, and your “notes” for an entry no longer need to be in a restricted-sized box. But those positive changes are rendered meaningless when the fonts overall have been made so much more difficult for so many people to read.

If you talk to Google’s internal accessibility folks about this sort of problem (and I’ve done so, numerous times) you’ll be told that the new design is fine for “most users” and meets formal accessibility standards.
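The “formal accessibility standards” in question are typically the WCAG contrast rules, which define contrast as a ratio of the relative luminances of the text and background colors; WCAG 2.x Level AA requires at least 4.5:1 for normal-size text. The standard calculation (this is the published WCAG formula, not anything Google-specific) shows why light-gray-on-white text fails so many readers:

```python
# WCAG 2.x relative luminance and contrast ratio.
# Level AA conformance requires at least 4.5:1 for normal-size text.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel value to linear light."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white: the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0

# A light gray (#9e9e9e) on white, a common "material" look,
# comes out to roughly 2.7:1 -- well below AA's 4.5:1 minimum.
print(round(contrast_ratio((158, 158, 158), (255, 255, 255)), 2))
```

A design can clear the 4.5:1 bar for “most users” and still be painful for people with low vision, which is exactly the gap between meeting formal standards and actually being readable.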

Yet the single most common complaint I get about Google is from users who simply can’t comfortably read or use Google interfaces, and Google is pushing material design into more and more of their products. Google Docs (I use this one heavily also), plus Sheets, Slides, and Sites are also apparently doomed to undergo this change, according to Google.

For the moment, you can still switch back to the familiar version of Contacts (there’s a link for this buried at the bottom of the left sidebar), but we know that Google at some point always ultimately removes the ability to use the older versions of their products.

This situation is rapidly becoming worse and worse for the negatively affected users.

Of course, Google could solve this problem by providing higher contrast UI options, but such options are severely discouraged at Google.

After all, you don’t want to make things easy for those users that you don’t really care about at all, right?

For shame Google. For shame.

–Lauren–

UPDATE (February 10, 2019): Another Positive Move by YouTube: No More General "Conspiracy Theory" Suggestions

When I feel that Google is making policy mistakes, I don’t hesitate to call them out as appropriate. I don’t enjoy doing this, but my goal is to help Google be better, not to see a great company becoming less so.

On the other hand, I much enjoy congratulating Google when they make important policy improvements — and yeah, it’s nice when this involves an area where I’ve long been urging such changes.

So I’m very pleased by Google’s newly announced changes to YouTube acceptable content rules, to significantly crack down on dangerous prank and dare/challenge videos on YouTube.

I’ve written about my concerns in this area many times, for example in “YouTube's Dangerous and Sickening Cesspool of ‘Prank’ and ‘Dare’ Videos” (https://lauren.vortex.com/2017/05/04/youtubes-dangerous-and-sickening-cesspool-of-prank-and-dare-videos), approaching two years ago.

I am not unsympathetic to Google’s philosophical and practical preferences for a “very light touch” when it comes to excluding specific types of content from their YouTube platform. In a perfect world, if all video creators behaved responsibly in the first place, we likely wouldn’t be facing these kinds of challenges at all. But of course, the reality is that irresponsible creators of all sorts permeate vast swaths of the Internet ecosystem.

The new YouTube “Policies on harmful or dangerous Content” (https://support.google.com/youtube/answer/2801964) should in theory go a long way toward appropriately addressing the kinds of concerns that I and others have expressed about dangerously inappropriate videos on YouTube.

Whether the new rules will actually have the desired positive effects will of course depend on how rigorously Google enforces these rules, and in particular whether that enforcement is evenhanded — meaning that large YouTube channels generating significant revenue are subject to the same serious enforcement actions as much smaller channels. 

Time will tell in this regard. But today, as someone who very much loves YouTube and who considers YouTube to be an irreplaceable aspect of my daily life, I want to thank Google for these positive steps toward making YouTube even better for us all. Kudos to the teams!

–Lauren–

If you’ve ever needed more evidence that Google just doesn’t care about users who have become “inconvenient” to their new business models, you need only look at the saga of their ongoing handling of the announced Google+ shutdown.

I’ve previously discussed what I believe to be the actual motivations for this action, which is suddenly pulling the rug out from beneath many of their most loyal users (“Can We Trust Google?” – https://lauren.vortex.com/2018/12/10/can-we-trust-google). But let’s leave the genesis of this betrayal of users aside, and just look at how Google is handling the actual process of eliminating G+.

What’s the technical term for this that I’m searching for? Oh yes: disgraceful.

We already know about Google’s incredible user trust failure in announcing dates for this process. First it was August. Then suddenly it was April. The G+ APIs (which vast numbers of websites, including mine, made the mistake of deeply embedding into their sites) will, we’re told, start “intermittently failing” (whatever that actually means) later this month.
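For site owners wondering what “intermittently failing” might mean in practice: calls to the deprecated endpoints can start returning errors unpredictably, so any remaining integration should treat the dying dependency as optional. Here is a generic sketch of that defensive pattern; the endpoint functions are simulated stand-ins, not actual G+ API calls:

```python
# Generic defensive wrapper for a dependency that has begun failing
# intermittently: retry briefly, then fall back gracefully rather than
# letting a dying third-party API break the whole page.

import time

def call_with_fallback(func, fallback, attempts=3, delay=0.0):
    """Try func() up to `attempts` times; on persistent failure return fallback."""
    for _ in range(attempts):
        try:
            return func()
        except Exception:
            if delay:
                time.sleep(delay)
    return fallback

# Simulate an endpoint that fails twice, then succeeds:
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("intermittent failure")
    return 42

print(call_with_fallback(flaky_endpoint, fallback=None))  # 42

# Once the API is fully shut down, every call fails, and the page
# degrades to the fallback value instead of breaking outright:
def dead_endpoint():
    raise RuntimeError("410 Gone")

print(call_with_fallback(dead_endpoint, fallback=None))  # None
```

The same idea applies to embedded widgets: render the page without the “Plus” counts or follower boxes when the upstream call fails, instead of letting the failure propagate.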

It gets much worse though. While Google has tools for users to download their own G+ postings for preservation, they have as far as I know provided nothing to help loyal G+ users maintain their social contacts — the array of other G+ followers and users with whom many of us have built up friendships on G+ over the years.

As far as Google is concerned, when G+ dies, all of your linkages to your G+ friends are gone forever. You can in theory try to reach out to each one and try to get their email addresses, but private messages on G+ have always been hit or miss, and I’ve had to resort to setting up my own invite-only forum for this purpose (“A New Invite-Only Forum for Victims of Google's Google+ Purge” – https://lauren.vortex.com/2019/01/05/a-new-invite-only-forum-for-victims-of-googles-google-purge).

If I’d been running G+ and had been ordered from “on high” to shut it down, I would have insisted on providing tools to help users migrate their social connections on G+ to other platforms, or at least to email! Google just doesn’t seem to care about the relationships that users have built over the years on G+.

You know what else I’d be doing if I ran G+ at this point? I’d be showing respect for my users. I’d be damned well warning everyone about the upcoming shutdown on a continuing basis — not just with an occasional post on G+ itself visible only to users following that official G+ user, and not relying on third-party media stories to inform the user community.

I’d have “butter bar” banners up keeping all G+ users informed. I’d be sending out emails to users updating them on what’s happening (so far as I know, only G+ API users have been contacted by email about the shutdown).

And with only a few months left until Google pulls the plug on G+, I sure as hell wouldn’t still be soliciting for new G+ users!

Yep — believe it or not — Google at this time is STILL soliciting for unsuspecting users to sign up for new G+ accounts, without any apparent warnings that you’re signing up for a service that is already officially the walking dead!

Perhaps this shows most vividly how Google today seems to just not give a damn about users who aren’t in their target demographics of the moment. Or maybe it’s just laziness. We can assume that consumer G+ is being operated by an ever-thinner skeleton crew these days. Sure, encourage users to waste their time setting up profiles and subscribing to communities that will be ghosts in a handful of weeks. What do we care?

The upshot here isn’t to suggest that Google is required to operate G+ forever, but rather that the way in which they’ve handled the announcements and the ongoing process of sunsetting a service much beloved by many Google users has been atrocious, and has shown little respect for Google’s users overall.

And that’s nothing short of very dismal, and very sad indeed.

–Lauren–

The casual outside observer can be readily excused for not noticing the multiplying red flags.

At first glance, so much seems golden for Google.

Google is still expanding its physical infrastructure by leaps and bounds. New buildings, new data centers, new offices — just last week we learned that Google will be taking over virtually the entire old Westside Pavilion for offices here in L.A. I used to hang out there many years ago, back when it was a relatively new shopping mall.

The pipeline of graduating students into Google’s HR machine remains packed to overflowing, and as usual there are vastly more applicants than positions available.

But to those of us with deeper connections to the firm and its employees, there are alarm bells sounding loudly.

Google is in the midst of a user trust and ethics crisis, and an increasing number of their best long-term employees are leaving.

Their reasons vary — after all, nobody is expected to stay with one firm forever, and there are career paths to be considered. 

However, it is undeniable to anyone who really knows Google that there is an increasing internal glumness, a sense of melancholy and in some cases anger, toward some key decisions that management has been making of late, and regarding the predicted trajectory for Google that logically could result.

As at most firms, there has always been some degree of friction at Google between management and the “rank and file” employees — traditionally staying largely internal to the firm and out of public view.

This has changed recently, with a series of controversial internal issues spilling out dramatically into the external world, in the form of employee protests and other employee actions really never seen before in modern Big Tech workplaces. 

Consternation over Google’s links to military projects, a potential censored search project for China, and a massive payout to a high-ranking employee accused of sexual harassment — the world at large has taken note of these issues and more.

Just in the last few days, a major shareholder lawsuit has been filed against Google relating to the sexual harassment case. And coincidentally a couple of days ago, the Arms Control Association named the 4000 Googlers who opposed Google’s contract with the Pentagon’s “Project Maven” as the “Arms Control Person(s) of the Year.”

There have indeed been some positive internal changes at Google resulting from this unprecedented level of employee activism — for example, Google has formalized an important and positive set of AI Principles.

For many Googlers, this has been too little, too late. Particularly among female and LGBTQ employees (but by no means restricted to those groups), the atmosphere at Google is no longer seen as welcoming and ethical. And increasing numbers of Googlers, alarmingly including those who have been at Google for many years, who have been the representatives of Google’s culture at its best, and who have constituted the ethical heart of the company, have left or are about to leave.

And this appears to be only the beginning. I’ve lost count of the Googlers I know who have asked me to keep an ear open for outside positions that fall into their areas of expertise — a bit ironic since I’m always looking for work myself. 

These kinds of situations can be devastating to a firm in the long run, in and of themselves.

They also hand Google’s political and other enemies (the haters and more) ammunition that can be used against Google. That hurts not only the firm, at a time when Big Tech is increasingly and inappropriately being framed as “enemies of the people” by Luddite forces on both the left and the right, but ultimately Google’s users and everyone else as well.

Yet compared to Google’s competition — for example firms like Amazon and Microsoft who happily accept military combat contracts, or Apple with its highly problematic actions to help China block open Internet access by removing VPN and other apps — Google’s ethics have traditionally been a cut above the others.

As Google’s brain and ethics drains continue, as more of their best and most principled employees leave, Google’s moral advantage over those other firms is rapidly deteriorating, and the exodus of such employees is always a “canary in the coal mine” warning that something fundamental has gone awry. 

So long as Google management chooses not to directly and effectively address these issues, and not to dedicate significant resources toward reclaiming the ethical, user trust, and employee trust high ground, there is little reason to anticipate a course correction from the increasingly dark path on which Google now appears to be traveling.

–Lauren–

I’ve been highly critical — to say the least — of the European Union’s insane global censorship regime — “The Right To Be Forgotten” (RTBF) — since well before it became actual, enacted law.

But there’s finally some good news about RTBF — in the form of a formal opinion from EU Advocate General Maciej Szpunar, chief adviser at Europe’s highest court.

I’m not sure offhand when I first began writing about the monstrosity that is RTBF, but a small subset of related posts includes:

The “Right to Be Forgotten”: A Threat We Dare Not Forget (2/2012):
https://lauren.vortex.com/archive/000938.html

Why the “Right To Be Forgotten” is the Worst Kind of Censorship (8/2015):
https://lauren.vortex.com/archive/001119.html

RTBF was always bad, but it became a full-fledged dumpster fire when (as many of us had predicted from the beginning) efforts were made to enforce its censorship demands globally. This gave the EU effectively worldwide censorship powers via RTBF’s “hide the library index cards” approach, creating a lowest common denominator “race to the bottom” of expanding mass, government-directed censorship of search results related to usually completely accurate and still published news and other information items.

In a nutshell, Maciej Szpunar’s opinion — which is not binding but is likely to be a strong indicator of how related final decisions will turn out — is that global application of EU RTBF decisions is usually unreasonable. While he doesn’t rule out the possibility of global “enforcement” in “certain situations” (an aspect that will need to be clarified), it’s obvious that he views routine global enforcement of EU RTBF demands as untenable.

This is of course only a first step toward reining in the RTBF monster, but it’s potentially an enormously important one, and we’ll be watching further developments in this arena with great interest indeed.

–Lauren–

Have you ever seen the “10 Things” philosophy page at Google? It’s uplifting. It’s sweet. And in significant respects, it’s as dead as the dodo:

https://www.google.com/about/philosophy.html

Even if it didn’t say so, you’d know that this page has been around at Google for a long, long time, because it still speaks of “doing one thing really, really well” and calls Gmail and Maps “new” products.

By no means is everything on that page now inoperative, but it’s difficult for some sections not to remind one of the classic film “Citizen Kane” where Charles Foster Kane himself rips his own, now “antique” Declaration of Principles to shreds.

Point number one on that nostalgic Google page is of special note: “Focus on the user and all else will follow.”

I would argue that when those words were first written many years ago, Google’s users — and the entire Internet world — were very different from today. By and large, the percentage of non-techies in Google’s user community was much smaller. You didn’t have so many busy non-technical persons, older people, and others for whom technology was not a 24/7 “lifestyle” but who were still very dependent on your services.

And of course, Google’s range of services was much narrower then, and Google services were not such a massive part of so many people’s lives around the world as those services are today.

Google has traditionally been — and still to a significant extent is — something of a “black box” to most users.  Unless you’ve been on the inside, many of its actions seem mysterious and inscrutable. Even being on the inside doesn’t necessarily free one completely of those observations.

While there have been some improvements in some respects, especially in regard to Google’s paid services, overall Google still seems to have something of an “us vs. them” attitude toward the majority of their users — a tendency to keep them at arm’s length and wall them off in significant respects.

Granted, when you have as many users as Google, you can’t provide “white-glove” personalized service to all of them.

But even within the practical range of what could be done to better serve users overall, one senses that Google decreasingly cares about you unless you’re a genuine paying customer, and even then only to the minimal extent required. 

Part of this is likely driven by quite realistic fears of potentially draconian actions by pandering politicians in governments around the planet, and the declining value of traditional online advertising models.

But Google’s at best lackadaisical attitude toward so many of its users is still impossible to justify. Just to note two recent examples that I’ve discussed, why would Google not choose to proactively help Chromecast users whose devices might be hijacked, even if the underlying fault wasn’t actually Google’s? And how can Google justify the sudden and total abandonment of loyal Google+ users who have spent many years building close communities, without even bothering to provide any tools to help those users stay in touch with each other after Google pulls the plug? 

It’s a matter of priorities. And at Google, only a limited number of particular users tend to be a priority.

It goes further of course. Google’s institutional fear of the “Streisand Effect” — reluctance to even mention a problem to avoid risking drawing any attention to it — rises essentially to the level of neurosis.

Google’s continual refusal to give users a truly representative “place at the deliberation table” through user advocates, or the means to escalate serious dilemmas through ombudspersons or similar roles, is ever more glaring as related issues continue to erupt into public notice, often with significantly negative PR impacts, making Google ever more vulnerable to the whims of opportunistic regulators and politicians.

Some years ago when I was consulting to Google, I was in the office of a significantly high ranking executive at their Mountain View headquarters (one clue to knowing if someone is a significant executive at Google — they have their own office). I was pitching my concepts for roles like ombudspersons, and he was pushing back. Finally, he asked me, “Are you volunteering?”

I thought about it for a few seconds and answered no. A role like that without the actual support of the company would be useless, and it seemed obvious from my meetings that the necessary support for such roles within the company did not exist.

In retrospect, even though I’ve always assumed that his question was really only meant rhetorically, I still wonder if I should have “called his bluff” so to speak and answered in the affirmative. It probably wouldn’t have mattered, but it was an interesting moment.

One way or another, the political “powers that be” today have the long knives out for Google and other Internet-based firms. And I for one don’t want to see Google go the way of DEC and Bell Labs and the long list of other firms that once seemed invincible but now either no longer exist or are mere shadows of their once-great selves.

Given current trends, I’m unsure if Google — even given the will to do so — can turn this around fast enough to avoid the destructive, toxic, political freight trains headed toward it. Many of my readers frequently suggest to me that even that sentiment is overly optimistic.

We shall see.

–Lauren–

Several weeks ago, in the wake of Google’s shameless and hypocritical abandonment of loyal Google users and communities with the announced, rapidly approaching shutdown of consumer Google+ (originally scheduled for August, then — with yet another kick in the teeth to their users — advanced to April based on obviously exaggerated security claims), I created a new private forum to help stay in touch with my own G+ followers.

This was not something that I had anticipated needing to do.

If Google had shown even an ounce of concern for their users’ feelings, and provided the means for the “families” of users created on G+ since its inception to have some way to stay in touch after Google pulls the plug on consumer G+ (to concentrate on expanding their enterprise/business version of G+), I wouldn’t even have had to think about creating a new forum at this stage.

But relying upon Google in these respects — please see: “Can We Trust Google?” (https://lauren.vortex.com/2018/12/10/can-we-trust-google) — is a fool’s errand. Google has made it clear that even their most loyal users can be booted out the door at any time that upper management finds them to be an “inconvenience” in the Google ecosystem, to be swatted like flies. Given Google’s continuing user support and user trust failures in other areas, we all should have seen this coming long ago. In fact, many of us did, but had hoped that we were wrong. 

There have been continuing efforts to find some way in conjunction with Google to keep some of these consumer G+ relationships alive — for example, via the enterprise version of G+. To date, these prospects continue to appear bleak. Google seems to have no respect at all for their consumer G+ users, beyond the absolute minimum of providing a way for users to download their own G+ posting archives.

Since Google clearly cares not about destroying the relationships built up on Google+, and since I have many friends on G+ with whom I don’t want to lose touch (many of whom, ironically, are Googlers — great Google employees), I created my own small, new private forum as a way to hopefully avoid total decapitation of these relationships at the hands of Google’s G+ guillotine.

A significant number of my G+ followers have already joined. But I’ve been frequently asked if I would consider opening it up further for other G+ users who feel burned by Google’s upcoming demolition of G+, especially since many G+ users are not finding the currently publicly available alternatives to be appealing, for a range of very good reasons. Facebook is a nonstarter for many, and various of the other public alternatives are already infested with alt-right and other forms of trolls who were justifiably kicked off of the mainstream platforms.

So while I am indeed willing to accept invitation requests more broadly from G+ users and other folks who are feeling increasingly without a welcoming social media home, please carefully consider the following before applying.

It’s my private forum. My rules apply. It operates as a (hopefully) benign dictatorship. I reserve the right to reject any invite applications or submitted postings. Any bad behavior (by my definitions) will result in ejection, typically on a one-strike basis. All submitted posts will be moderated (by myself and/or by trusted users whom I designate) before potentially being accepted and becoming visible on the forum. Private messaging between users is not supported at this time. I make no guarantees regarding how long the forum will operate or how it might evolve, but my intention is for it to be a low-key and comfortable place for friends to post and discuss issues of interest.

If you don’t like that kind of environment, then please don’t even bother applying for an invitation. Go use Facebook. Or go somewhere else. Good luck. You’re going to need it.

If you do want to apply for an invitation, please send an email message explaining briefly who you are and why you want to join, to:

g-forum-request@vortex.com

I look forward to hearing from you.

Take care. Be seeing you.

–Lauren–

You may have heard by now that significant numbers of Google’s excellent Chromecast devices — dongles that attach to televisions to display video streams — are being “hijacked” by hackers, forcing attached televisions to display content of the hackers’ choosing. The same exploit permits other tampering with some users’ Chromecasts, including apparently forced reboots, factory resets, and configuration changes. Google Home devices don’t seem to be similarly targeted currently, but they likely are similarly vulnerable.

The underlying technical vulnerability itself has been known for years, and Google has been uninterested in changing it. These devices expose several unauthenticated control ports (notably 8008 and 8009), depending on local network isolation rather than strong authentication for access control.

In theory, if everyone had properly configured Internet routers with bug-free firmware, this authentication and control design would likely be adequate. But of course, not everyone falls into this category.

If those control ports end up accessible to the outside world via unintended port forwarding settings (the UPnP capability in most routers is especially problematic in this regard), the associated devices become vulnerable to remote tampering, and may be discoverable by search engines (such as Shodan) that specialize in finding and exposing devices in this condition.
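To make the exposure concrete, here’s a minimal sketch of the kind of check involved. The port numbers (8008, 8009, 8443) are the well-known Chromecast control ports; the port-mapping data format here is a simplified assumption for illustration, not a real router API (real routers report mappings via the UPnP IGD protocol):

```python
# Well-known Chromecast control ports: 8008 (HTTP control),
# 8009 (Cast protocol), 8443 (HTTPS control).
CHROMECAST_PORTS = {8008, 8009, 8443}

def risky_mappings(upnp_mappings):
    """Flag UPnP port mappings that expose Chromecast-style control
    ports to the outside world.

    Each mapping is a dict with 'external_port', 'internal_port',
    and 'internal_host' keys (a simplified, hypothetical format).
    """
    return [m for m in upnp_mappings
            if m["internal_port"] in CHROMECAST_PORTS]

# Example: one benign mapping, and one that exposes a Cast device.
mappings = [
    {"external_port": 22000, "internal_port": 22000,
     "internal_host": "192.168.1.10"},   # e.g., a file-sync service
    {"external_port": 8009, "internal_port": 8009,
     "internal_host": "192.168.1.42"},   # an exposed Chromecast!
]

for m in risky_mappings(mappings):
    print(f"WARNING: external port {m['external_port']} forwards to "
          f"{m['internal_host']}:{m['internal_port']}")
```

The point is simply that any forwarding rule targeting those internal ports makes the attached device remotely controllable; a user checking their router’s port-forwarding and UPnP settings for such entries is the practical defense.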

Google has their own reasons for not wanting to change the authentication model for these devices, and I’m not going to argue the technical ramifications of their stance right now.

But the manner in which Google has been reacting to this new round of attacks on Chromecast users is all too typical of their continuing user trust failures, others of which I’ve outlined in the recent posts “Can We Trust Google?” (https://lauren.vortex.com/2018/12/10/can-we-trust-google) and “The Death of Google” (https://lauren.vortex.com/2018/10/08/the-death-of-google).

Granted, Chromecast hijacking doesn’t rank at the top of exploits sorted by severity, but Google’s responses to this situation are entirely characteristic of their attitude when faced with such controversies.

To date — as far as I know — Google has simply taken the “pass the buck” approach. In response to media queries about this issue, Google insists that the problem isn’t their fault. They assert that other devices made by other firms can have the same vulnerabilities. They lay the blame on users who have configured their routers incorrectly. And so on.

While we can argue the details of the authentication design that Google is using for these devices, there’s something that I consider to be inarguable: When you blame your users for a problem, you are virtually always on the losing side of the argument.

It’s as if Google just can’t bring itself to admit that anything could be wrong with the Chromecast ecosystem — or other aspects of their vast operating environments.

Forget about who’s to blame for the situation. Instead, how about thinking of ways to assist those users who are being affected or could be affected, without relying on third-party media to provide that kind of help!

Here’s what I’d do if I were making these decisions at Google.

I’d make an official blog post on the appropriate Google blogs alerting Chromecast users to these attacks and explaining how users can check to make sure that their routers are configured to block such exploits. I’d place something similar prominently within the official Chromecast help pages, where many users already affected by the problem would be most likely to initially turn for official “straight from Google” help.

This kind of proactive outreach shouldn’t be a difficult decision for a firm like Google that has so many superlative aspects. But again and again, it seems that Google has some sort of internal compulsion to try to minimize such matters and to avoid reaching out to users in such situations, and it frequently seems to really engage publicly in these kinds of circumstances only when problems have escalated to the point where Google feels that its back is against the wall and that it has no other choice.

This isn’t rocket science. Hell, it’s not even computer science. We’re talking about demonstrating genuine respect for your users, even if the total number of users affected is relatively small at Google Scale, even if the problems aren’t extreme, even if the problems arguably aren’t even your fault.

It’s baffling. It’s disturbing. And it undermines overall user trust in Google relating to far more critical issues, to the detriment of both Google itself and Google’s users.

And perhaps most importantly, Google could easily improve this situation, if they chose to do so. No new data centers need be built for this purpose, no new code is required. 

What’s needed is merely the recognition by Google that despite their great technical prowess, they have failed to really internalize the fact that all users matter — even the ones with limited technical expertise — and that Google’s attitude toward those users who depend on their services matters at least as much as the quality of those services themselves. 

–Lauren–

When small, closed minds tackle big issues, the results are rarely good, and frequently are awful. This tends to be especially true when governments attempt to restrict the development and evolution of technology. Not only do those attempts routinely fail at their stated and ostensible purposes, but they often do massive self-inflicted damage along the way, and end up further empowering our adversaries.

Much as Trump’s expensive fantasy wall (“Mexico will pay for it!”) would have little ultimate impact on genuine immigration problems — other than to further exacerbate them — his Commerce department’s new plans for restricting the export of technologies such as AI, speech recognition, natural language understanding, and computer vision would be yet another unforced error that could decimate the USA’s leading role in these areas.

We’ve been down this kind of road before. Years ago, the USA federal government placed draconian restrictions on the export of encryption technologies,  classifying them as a form of munitions. The result was that the rest of the world zoomed ahead in crypto tech. This also triggered famously bizarre situations like t-shirts with encryption source code printed on them being restricted, and the co-inventor of the UNIX operating system — Ken Thompson — battling to take his “Belle” chess-playing computer outside the country, because the U.S. government felt that various of the chips inside fell into this restricted category. (At the time, Ken was reportedly quoted as saying that the only way you could hurt someone with Belle was by dropping it out of a plane — you might kill someone if it hit them!)

As was the case with encryption, R&D information about AI and the other technologies that Commerce is talking about restricting today is widely shared among researchers. Likewise, any attempts to stop these new technologies from being widely available — even attempts at restricting access to them by specific countries on our designated blacklist of the moment — will inevitably fail.

Even worse, the reaction of the global community to such ill-advised actions by the U.S. will inevitably tend to put us at a disadvantage yet again, as other countries with more intelligent and insightful leadership race ahead leaving us behind in the dust of politically motivated export control regimes.

To restrict the export of AI and affiliated technologies is shortsighted and dangerous, and will accomplish nothing but damage to our own interests by restricting our ability to participate fully and openly in these crucial areas. It’s the kind of self-destructive thinking that we’ve come to expect from the anti-science, “build walls” Trump administration, but it must be firmly and completely rejected nonetheless.

–Lauren–

It now seems unlikely that Google will be proceeding anytime soon with their highly controversial “Dragonfly” project to provide Chinese government-controlled censored search services in China. The project has become politically radioactive — odds are that any attempt to move forward would result in overwhelming bipartisan blocking actions by Congress.

But this doesn’t mean that Google can — or that they should — leave China. About 20% of the global population is within Chinese territorial boundaries, well over a billion human beings. Even if it were financially practical to do so (which it isn’t), we cannot ethically abandon them.

Our ethical concerns with China are not with the Chinese people, they’re with the oppressive, dictatorial Chinese government.

In fact, if you ever deal directly with Chinese individuals, you’ll generally find them to be among the greatest folks you’ve ever encountered. Even if your experience is only with the multitude of Chinese-operated stores on eBay, it’s routine to receive superb customer service that puts many U.S.-based firms to shame.

So the dilemma — not just for Google but for all of us in dealing with China — is how to best serve the people of China, without directly supporting China’s totalitarian regime and their escalating and serious mass human rights abuses.

Obviously, it’s impossible to completely compartmentalize these two aspects of the problem, but there are some fairly obvious guidelines that we can apply.

Joint research with China — for example, in areas such as machine learning and artificial intelligence — is one category that will generally make sense to pursue, even though we realize that the fruits of such work can be used in negative ways.

But realistically, this is true of most research by humankind throughout history, and joint research projects can at the very least provide valuable insight into important work that might not otherwise be surfaced to domestic researchers.

On the other hand, participation in operational Chinese systems that wage war and/or directly further the oppression of the Chinese people should be absolutely off the table. This is the dangerous category into which Dragonfly would ultimately have fallen, because the Chinese government’s vast censorship apparatus is a foundational and crucial aspect of their maintaining oppressive control over their population.

The fact that the vast majority of common queries under Dragonfly might not have been censored is irrelevant to the concerns at hand. It’s those crucial other Dragonfly queries — censored by order of the Chinese dictators — that would drag this concept deep into an unacceptable ethical minefield.

These are but two examples from a complex array of situations relating to China. Neither Google nor the rest of us can or should disengage from China. But the specific ways in which we choose to work with China are paramount, and it is incumbent on us to assure that such projects always pass reasonable ethical muster.

As usual with so much in life, as the old saying goes (and the Chinese probably said it first) — the devil is in the details.

–Lauren–

If you’re a regular reader of my missives, you know that one of my continuing gripes with Google — going back many years — relates to their continuing failures to devise a system to deal appropriately with user problems in need of support escalation.

I have enormous respect for Google — a great company — but their bullheaded refusal to consider solutions that so many firms have found useful in these regards, such as ombudspersons and user advocates, is a source of continuing deep disappointment.

I’ve written about these issues so very many times over the years that I’m not going to repeat myself here, beyond saying that the usual excuse one hears — that people using free services should expect to get the level of service that they’re paying for — is not an acceptable one for services that have become so integral to so many people’s lives.

But it goes way beyond this. Escalation failures are common even with users of Google’s paid business services, and for major YouTube creators in monetary relationships with Google.

In fact, YouTube-related problems are near the top of the list of why users come to me asking for help with Google issues. Sometimes I can help them, sometimes I can’t. Either way, this isn’t something I should need to be doing from the outside of Google! Google needs to have dedicated employee roles for these escalation tasks.

I won’t here plow again over the ground that I’ve covered in the past regarding YouTube problems with Content ID and false ownership claims, and the desperation of honest YouTube creators who get crunched between the gears of YouTube’s claim/counterclaim machinery.

Rather, I’ll point to a particularly vivid, very recent story of a YouTube creator who had his video (monetized, with over 47 million views) ripped out from under him by someone with no actual ownership rights, and the Kafkaesque failures of Google to deal with the situation appropriately.

This case is all the more painful since this creator had enough subscribers that he had a YouTube “liaison” (something most YouTube creators don’t have, of course), but YouTube’s procedures failed so badly that even this didn’t help him. I recommend that you watch his video explaining the situation (posted just five days ago, it already has over two million views):

“How my video with 47 million views was stolen on YouTube” – https://www.youtube.com/watch?v=z4AeoAWGJBw 

And keep in mind, as he points out himself, this is far from an isolated kind of case.

Google knows what’s necessary to fix these kinds of situations. You start by hiring ombudspersons or user advocates, or by creating similar dedicated roles with genuine responsibility within the firm.

Google continues to fight these concepts, and the longer they do so, the more they risk seeing trust in Google further diminished and eventually decimated.

–Lauren–

Recently in “Can We Trust Google?” (https://lauren.vortex.com/2018/12/10/can-we-trust-google), I explored the question of whether Google should be considered to be a reliable partner to consumers or businesses, given the manner in which Google all too frequently makes significant changes to their products without documenting associated user interface and other related issues appropriately.

Even worse, Google has a long history of leaving users out in the cold when Google abruptly decides to kill products, often with inadequate or questionable claimed justifications.

Google has taken such actions again and again, most recently with the consumer version of Google+ — whose users represent among Google’s most loyal fans. Today, Google announced that G+ APIs will start to break in January — causing vast numbers of active sites and archives which depend on them for various display elements (including some of my own sites) to turn into graphical garbage without significant and time-consuming modifications.

Meanwhile, Google is speeding ahead with their total shutdown of consumer G+, on their new accelerated schedule that suddenly took months off of their originally announced rapid shutdown timetable.

If this all isn’t enough of a kick in the teeth to Google fans, Google continues extolling the virtues of the new G+ features that they plan for enterprises — for businesses — which apparently will be continuing and expanding even as the consumer side is liquidated.

But I wonder how long enterprise G+ will actually last. So many business people have contacted me noting that they are no longer willing to entrust long-term or mission-critical applications to Google, because they just don’t trust that Google can be depended upon to maintain products into the foreseeable future. These entrepreneurs fear that they’re going to end up being ground up in the garbage disposal just like Google’s consumer users so often are, when Google products are pulled out from under them.

This goes far beyond Google+. These issues permeate the way Google treats both consumer and business users — very much as if they were disposable commodities, where only the largest demographic groups mattered at all.

I am a tremendous fan of Google and Googlers. But I’m forced to agree that at present it’s difficult to recommend Google as a stable resource for businesses that need to plan further than relatively short periods into the future. 

For business planning purposes, all of that great Google technology is effectively worthless if you can’t depend on it being stable and still being available even a few short years from now. 

For all the many faults of firms like Microsoft and Amazon — and I’m no friend of either — both of them seem to have learned that businesses need stability above all — a lesson that Google still doesn’t seem to have really internalized.

Both Amazon and Microsoft seem to understand that the ways in which you treat the users of your consumer products will reflect mightily on business’ decisions about adopting your enterprise products and services. For all of their vast technological expertise, Google seems utterly clueless regarding this important fact.

When I mentioned recently that I still believed it possible for Google to turn this situation around, I received a bunch of responses from readers suggesting that I was wrong, that Google will never make the kinds of changes that would truly be necessary.

I will continue to try to help folks with Google-related issues to the maximal extent that I can. But I sure hope that my optimistic view regarding Google’s ability to change isn’t proven to be painfully incorrect in the end.

–Lauren–

During a radio interview a few minutes ago, I was asked for my opinion regarding Google CEO Sundar Pichai’s hearing at Congress today. 

There’s a lot that can be said about this hearing. Sundar confirmed that Google does not plan to go ahead with a Chinese government censored search engine — right now. 

Most of the hearing involved the ridiculous, false continuing charges that Google’s search results are politically biased — they’re not.

But relating to that second topic, I heard one of the scariest demands ever uttered by a member of the U.S. Congress.

Rep. Steve King (R-Iowa) wants Google to hand over to Congress the identities of the Googlers whose work relates to search algorithms. King made it clear that he wants to examine these private individuals’ personal social media postings, his direct implication being that showing a political orientation in your personal postings would mean that you’d be incapable of doing your work on search in an unbiased manner.

This is worse than wrong, worse than stupid, worse than lunacy — it’s outright dangerous McCarthyism of the first order.

Everything else that occurred in that hearing pales into insignificance compared with King’s statement.

King continued by threatening Google with various punitive actions if Google refuses to agree to his demand regarding Google employees, and also to turn over the details of how the Google search algorithms are designed — which of course Congress would leak — setting the stage for search to be gamed and ruined by every tech-savvy wacko and crook.

Steve King has a long history of crazy, racist remarks, so it’s no surprise that he also rants into straitjacket territory when it comes to Google as well.

But his remarks today regarding Google were absolutely chilling, and they need to be widely and vigorously condemned in no uncertain terms.

–Lauren–

Recent Google Posts [ 11-Dec-18 5:34pm ]
Can We Trust Google? [ 10-Dec-18 7:04pm ]

I consider Google to be a great company. I have many friends who are Googlers. I am dependent on many Google services and products.

But if you’ve gotten the sense that Google has been flailing around in a seemingly uncoordinated fashion lately, like a chainsaw run wild, you’re not the only one. And I’m not talking right now about their nightmare “Dragonfly” Chinese censorship project or the righteous rising tide of their own employees’ protests.

Let’s talk about the users. Let’s talk about you and me.

Some of Google’s management decisions are chopping Google’s most loyal users to figurative bloody bits.

Google has fantastic engineering teams, world-class privacy and security teams, brilliant lawyers, and so many other wonderful human and technical resources — yet Google’s upper management apparently still hasn’t really grown up.

To put it bluntly, Google management in key respects treats ordinary users like disposable bathroom paper products, to be used and quickly disposed of without significant consideration of the ultimate impacts.

There’s a site out on the Web that calls itself the Google Graveyard — they list all the Google services that have appeared and then unceremoniously vanished over the years, leaving seas of disappointed and upset users in their wake.

Today Google apparently announced that they’re pushing up the death date for consumer Google+ to April. Just recently they said it was going to be next August, so loyal G+ users — and don’t believe the propaganda, there are vast numbers of them — were planning on the basis of that original date. Google is citing a new, minor G+ security bug and is apparently using it as an excuse. But we know that’s bogus, because Google itself notes that this minor bug existed for less than a week and that there was no evidence of it being exploited.

Google just wants to dump its social media users who aren’t on YouTube. No matter the many years that those users on G+ have spent building up vibrant communities on the platform. We know Google isn’t killing the essential G+ technical infrastructure, since they plan to continue it for their enterprise (paying) customers.

Who knows, maybe Google will next announce that consumer G+ will shut down 48 hours from now.

Let’s face it, you simply cannot depend on Google honorably even sticking to their own service shutdown dates and not pulling the plug earlier — users be damned! Who really cares about the impacts on those users, right?

You want another recent example? Glad you asked! Over the last handful of days, Google suddenly, and with no notification at all, removed a feature from Google Voice, changing how the system treats incoming calls for users employing that option in call screening. Because Google didn’t bother to notify any Google Voice users about this in advance, users only found out when their callers started expressing confusion about what was going on. I’m in useful discussions with the Google Voice team about this situation, and Google asserts that most users didn’t choose the mix of options that was affected by this change.

But that’s not the point! For those users who did use that option set, this was a big deal, a major disruptive change that they were not told about (and in fact, still have not officially been informed about as far as I know), leaving them no opportunity to take reasonable proactive actions and limit the negative impacts.

The list of similarly affected Google products and services goes on and on. Google adds and removes features and changes user interfaces without warning, explanation, or frequently even any documentation. They kill off services — used by millions — on short notice, and even when they give longer notice they may then suddenly chop months from that interval, as they have with G+.

Some might argue that users who don’t pay for Google services shouldn’t expect much more than nuthin’. But that’s garbage.

Vast numbers of persons depend on Google for many aspects of their lives. In many cases, they would happily pay reasonable fees for better support and some guarantees that Google won’t suddenly kill their favorite services! Innumerable people have told me how they’d happily pay to use consumer G+ or Google Voice under those conditions, and the same goes for many other Google services as well.

And yet, except for the limited offerings in “Google One” and media offerings like YouTube and Music premium services, essentially the only other way to pay for standard Google services is through Google’s “G Suite” enterprise model, which is domain-centric and far more appropriate for corporate users than for individuals.

Google knows that as time goes on, their traditional advertising revenue model will become less and less effective. This is obviously one reason why they’ve been pivoting toward paid service models aimed at businesses and other organizations. That includes not just G Suite, but also great products like their AI offerings, Google Cloud, and more.

But no matter how technically advanced those products, there’s a fundamental question that any potential paying user of them must ask themselves. Can I depend on these services still being available a year from now? Or in five years? How do I know that Google won’t treat business users the same ways as they’ve treated their consumer users?

In fact, sadly, I hear this all the time now. Users tell me that they had been planning to move their business services to Google, but after what they’ve seen happening on the consumer side they just don’t trust Google to be a reliable partner going forward.

And I can’t blame folks for feeling this way. As the old saying goes, “Fool me once shame on you, fool me twice shame on me.”

The increasingly shabby way that Google treats consumer users in the respects that I’ve been discussing here has real world impacts on how potential business users view Google.  The fact that Google has been continuing to pull the rug out from under their most loyal consumer users has not been lost on business observers, who know that even though Google’s services are usually technically superior, that fact alone is not enough to trust Google with your business operations.

Google, it seems, works quite hard to avoid thinking much about these negative impacts. That’s part of the reason, I believe, why Google fights so hard against filling commonly accepted roles that so many firms have found to be so incredibly useful, such as ombudspersons, ethics officers, and user advocates.

In some ways, Google management still behaves as if Google were a bunch of PCs stacked up in a garage. They have not yet really taken responsibility for their important place in the world.

Personally, I still believe that Google can turn around this situation for the better. However, I am forced to admit that to date, I do not see significant signs of their being willing to take the significant steps and to make the serious changes necessary for this to occur.

–Lauren–

Google’s highly controversial “Dragonfly” project, exploring the possibility of providing Chinese-government censored and controlled search to China, is back in the news — with continuing protests by concerned Google employees, including public letters and other actions.

I have previously explained my opposition to this project and my solidarity with these Googlers, in posts such as: “Google Admits It Has Chinese Censorship Search Plans - What This Means” (https://lauren.vortex.com/2018/08/17/google-admits-it-has-chinese-censorship-search-plans-what-this-means) and other related essays.

There are a multitude of reasons to be skeptical about this project, ranging from philosophical to emotional to economic. Basic issues relating to freedom of speech and individual rights come into play when dealing with an absolute dictatorship that sends people to “reeducation” camps where they are tortured merely for having the “wrong” religions, or where making an “inappropriate” comment on the tightly-controlled Chinese Internet can result in authorities dragging you away to secret prisons.

There is also ample evidence to suggest that if Google proceeds to provide such search services in China, they will be mercilessly attacked by politicians from both sides of the aisle, many of whom already are in the ranks of the Google Haters.

But for the moment, let’s attempt to set such horrors and the politics aside, and look at Dragonfly in the cold, hard logic of available data. Google famously considers itself to be a “data-driven” company. Does the available data suggest that Dragonfly would be practical for Google to implement and operate going forward?

The answer is clearly negative.

Philosopher George Santayana’s notable assertion that: "Those who cannot remember the past are condemned to repeat it" is basically another way of saying “If you ignore the data staring you in the face, don’t be surprised when you get screwed.”

And the data regarding the probability of getting burned, screwed, or otherwise bulldozed by China is plentiful.

Google of course has plenty of specific data in hand about this. They tried providing censored search to China around a decade ago. The result was (as many of us predicted at the time) ever-increasing demands for more censorship and more control from the Chinese government, and then a series of Chinese-based hack attacks against Google itself, causing Google to correctly pull the plug on that project.

Fast forward to today, and Google management seems to be asserting that somehow THIS time it will all be different and work out just fine. Is there any data to suggest that this view is accurate?

Again, the answer is clearly no. In fact, vast evidence suggests exactly the opposite.

The optimistic assertions of Dragonfly proponents might have a modicum of validity if there were any evidence that China has been moving in a positive direction relating to speech and other human rights (in either or both of the technological and non-technological realms) in the years since Google’s original attempt to provide censored Chinese search.

But the data regarding China’s behavior over this period clearly demonstrates China moving in precisely the contrary direction! 

China has used this time not to improve the human rights of its people, but to massively tighten its grip and to escalate its abuses in nightmarish ways. And especially to the point of this discussion, China’s ever more dictatorially monitored and controlled Internet has become a key tool in the government’s campaign of terror.

China has turned the democratic ideals of the Internet’s founders on their heads, morphing its own Internet into a bloody bludgeon to use against its own people, and even against Chinese persons living outside of China.

The reality of course is that China is an economic powerhouse — the West has already sold its economic soul to China to a major degree. There is no reversing that in the foreseeable future. Neither threats nor tariffs will make a real difference.

But we still do have some free choice when it comes to China.

And one specific choice — a righteous and honorable choice indeed — is to NOT get into bed with the Chinese dictators’ Internet control and censorship regime.  

Giving the Chinese government dictators any control over Google search results would be effectively tantamount to embracing their horrific abuses — PR releases to the contrary notwithstanding.

The data — the history — teaches us clearly that there is no “just dipping your toe into the water” when it comes to collaboration with unrepentant, dictatorial regimes in the process of extending and accelerating their abuses, as is the case with China. You will not be able to make China behave any “better” through your actions. But you will inevitably be ultimately dragged body and soul into their putrid deeps. 

The data is obvious. The data is devastating. 

Google should immediately end its dance with China over Chinese censored search. Dragonfly and any similar projects should be put out of their miseries for good and all.

–Lauren–

Do you know why Facebook is called Facebook? The name dates back to founder Mark Zuckerberg’s “FaceMash” project at Harvard, designed to display photos of students’ faces (without their explicit permissions) to be compared in terms of physical attractiveness. Essentially, a way he and his friends could avoid dating “ugly” people by his definition. Zuck even toyed with the idea of comparing those student photos with shots of farm animals. 

Immature. Exploitative. Verging on pre-echoes of evils to come.

Fast forward to Facebook of today. As we’ve watched Zuckerberg’s baby expand over the years like a mutant virus from science fiction, we’ve had plenty of warnings that the at best amoral attitudes of Zuck and his hand-picked cronies have permeated the Facebook ecosystem. 

It’s long been a given that Facebook ruthlessly controls, limits, and manipulates the data that users are shown — to its own financial advantage. 

But long before we learned of Facebook’s deep embeds in right-wing politics, and the Russians’ own deep manipulative embeds in Facebook, there were other clues that Facebook’s ethical compass was virtually nonexistent.

Remember when it was discovered that Facebook was manipulating information shown to specific sets of users to see if their emotional states could be altered by such machinations without their knowledge? 

Over and over again, Facebook has been caught in misstatements, in subterfuge, in outright lies — including the recent revelations of their paying an outside PR hit firm to fabricate attack pieces on other firms to divert attention from Facebook’s own spreading problems, even to the extent of the firm reportedly spreading false antisemitic conspiracy theories.

Zuck and Chief Operating Officer Sheryl Sandberg found an outgoing employee to fall on his sword to take official responsibility for this, and initially both Zuck and Sheryl publicly disclaimed any knowledge of that outside firm’s actions. But now Sheryl has apparently reversed herself, admitting that information about the firm did reach her desk. And do you really believe that control freaks like Mark Zuckerberg and Sandberg weren’t being kept informed about this in some manner all along? C’mon!

Facebook of course is not the only large Internet firm with ethical challenges. Recently in “The Death of Google” (https://lauren.vortex.com/2018/10/08/the-death-of-google), and “After the Walkout, Google's Moment of Truth” (https://lauren.vortex.com/2018/11/03/after-the-walkout-googles-moment-of-truth), I noted Google’s own ethical failings of late, and my suggestions for making Google a better Google. Importantly, those posts were not predicting Google’s demise, but rather were proposing means to help Google avoid drifting further from the admirable principles of its founding (“organizing and making available the world’s information” — in sharp contrast to Facebook’s seminal “avoid dating ugly people” design goal).  So both of those posts regarding Google were in the manner of Dickens’  “Ghost of Christmas Future” — a discussion of bad outcomes that might be, not that must be.  

Saving Google is a righteous and worthy goal.

Not so Facebook. Facebook’s business model is and has always been fundamentally rotten to its core, and the more that this core has been exposed to the public, the more foul the stench of rotten decay that Facebook emits.

“Saving” Facebook would mean helping to perpetuate the sordid, manipulative mess of Facebook today, that reaches back to its very beginnings — a creation that no longer deserves to exist.

In theory, Facebook could change its ways in positive directions, but not without abandoning virtually everything that has characterized Facebook since its earliest days. 

And there is no indication — zero, none, nil — that Zuckerberg has any intention of letting that happen to his self-made monster.

So in the final analysis — from an ethical standpoint at least — there is no point to trying to “save” Facebook — not from regulators, not from politicians, and certainly not from itself. 

The likely end of Facebook as we know it today will not come tomorrow, or next month, or even perhaps over a short span of years. 

But the die has been cast, and nothing short of a miracle will save Facebook in the long run. And whether or not you believe in miracles, Facebook doesn’t deserve one.

–Lauren–

Some new studies are quantifying the levels of toxic emissions from conventional 3D printers using conventional plastic filaments of various types. The results are not particularly encouraging, but are not a big surprise. They are certainly important to note, and since I’ve discussed the usefulness of 3D printing many times in the past, I wanted to pass along some of my thoughts regarding these new reports. (Gizmodo’s summary is here: https://gizmodo.com/new-study-details-all-the-toxic-particles-spewed-out-by-3d-p-1830379464).

The big takeaways are pretty much in line with what we already knew (or at least suspected), but add some pretty large exclamation points.

PLA filament generally produces far fewer toxic emissions than most other filament compositions (especially ABS), and is what I would recommend using in the vast majority of cases.

The finding that inexpensive filaments tend to have more emissions than “name brands” is interesting, probably related to levels of contaminants in the raw filament ingredients. However, in practice filament has become so fungible — with manufacturers putting different brand names on the same physical filament from the same factories — that it’s often difficult to know if you’re actually buying the filament that you think you are. And of course, the most widely used filaments tend to be among the most inexpensive.

My own recommendation has always been to never run a 3D printer that doesn’t have its own enclosed build chamber (which the overwhelming majority don’t) in a room routinely occupied by people or animals — print runs can take many hours, and emissions continue the entire time. Printing outside isn’t typically practical due to air currents and sudden temperature changes. A generally good location for common “open” printers is a garage, ideally with a ventilation fan.

The reported fact that filament color affects emissions is not unexpected — there has long been concern about the various additives that are used to create these colors. Black filament is probably the worst case, since it tends to have all sorts of leftover filament scraps and gunk thrown into the mix — the fact that black filament tends to regularly clog 3D printers is another warning sign.

Probably the safest choice overall, when specific colors aren’t at issue, is to print with “natural color” (whitish, rather transparent) PLA filament, which tends to have minimal additives. It is also typically the easiest and most reliable to print with, probably for that same reason.

The finding that there is a “burst” of aerosol emissions when printing begins is particularly annoying, since it’s when printing is getting started that you tend to be most closely inspecting the process looking for early print failures.

So the bottom line is pretty much what you’d expect — breathing the stuff emanating from molten plastic isn’t great for you. Then again, even though it only heated the plastic sheets for a few minutes at a time (as opposed to the hours-long running times of modern 3D printers), I loved my old Mattel “VAC-U-FORM” when I was a kid — and who knows how toxic the plastics heated in that beauty really were (https://www.youtube.com/watch?v=lCvgvWiZNe8). Egads, not only can you still get them on eBay, replacement parts and plastic refill packs are still being sold as well!

I guess they got it right in “The Graduate” after all: https://www.youtube.com/watch?v=Dug-G9xVdVs

Be seeing you.

–Lauren–

UPDATE (November 22, 2018): Save Google — but Let Facebook Die

– – –

Google has reached what could very well be an existential moment of truth in its corporate history.

The recent global walkout of Google employees and contractors included more than 20,000 participants by current counts, and the final numbers are almost certain to be even higher. This puts total participation at something north of 20% of the entire firm — a remarkable achievement by the organizers.

Almost a month ago, when I posted my concerns regarding the path that this great company has been taking, and the associated impacts on both their employees and users (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google), the sexual assault and harassment issues that were the proximate trigger for the walkout were not yet known publicly — not even to most Googlers.

These newly reported management failures clearly fit tightly into the same pattern of longstanding issues that I’ve frequently noted, and various broad concerns related to Google’s accountability and transparency that have been cited as additional foundational reasons for the walkout.

Google today — almost exactly twenty years since its founding — is at a crossroads. The decisions that management makes now regarding the issues that drove the walkout and other issues of concern to Googlers, Google’s users, and the world at large, will greatly impact the future success of the firm, or even how long into the future Google will continue to exist in a recognizable form at all.

That so many of these issues have reached the public sphere at around the same time — sexual abuse and harassment, Googlers’ concerns about military contracts and a secret project aimed at providing Chinese-government censored search, and more — should not actually be a surprise.

For all of these matters are symptomatic of larger problematic ethical factors that have crept into Google’s structure, and without a foundational change of direction in this respect, new concerns will inevitably keep arising, and Google will keep lurching from crisis to crisis.

The walkout organizers will reportedly be meeting with Google CEO Sundar Pichai imminently, and I fully endorse the organizers’ publicly stated demands.

But management deeds are needed — not just words. After a demonstration of this nature, it’s all too easy for conciliatory statements to not be followed by concrete and sustained actions, and then for the original status quo to reassert itself over time.

This is also a most appropriate moment for Google to act on a range of systemic factors that have led to transparency, accountability, and other problems associated with Google management’s interactions with rank-and-file employees, and between Google as a whole and its users. 

Regarding the latter point, since I’ve many times over the years publicly outlined my thoughts regarding the need for Google employees dedicated to roles such as ombudsperson, user advocates, and ethics officer (call the latter “Guardian of Googleyness” if you prefer), I won’t detail these crucial positions again here now. But as the walkout strongly suggests, these all are more critically needed by Google than ever before, because they all connect back to the basic ethical issues at the core of many concerns regarding Google.

These are all interconnected and interrelated matters, and attempts to improve any of them in isolation from the others will ultimately be like sweeping dirt under the proverbial rug — such problems are pretty much guaranteed to eventually reemerge with even more serious negative consequences down the line.

Google is indeed a great company. No firm can be better than its employees, and Google’s employees — a significant number of whom I know personally — have through their walkout demonstrated to the world something that I already knew about them. 

Googlers care deeply about Google. They want it to be the best Google that it possibly can be, and that means meeting high ethical standards vertically, horizontally, and from A to Z.

Now it’s Google management’s turn. Can they demonstrate to their employees, to Google’s users, and to the global community, that loyalty towards Google has not been misplaced?

We shall see.

–Lauren–

05-Mar-22

The controversy over the recently announced decision by YouTube to remove publicly viewable “Dislike” counts from all videos is continuing to grow. Many YT creators feel that the loss of a publicly viewable Like/Dislike ratio will be a serious detriment. I know that I consider that ratio useful.

There are some good arguments by Google/YouTube for this action, particularly relating to harassment campaigns targeting the Dislikes on specific videos. However, I believe that YouTube has gone too far in this instance, when a more nuanced approach would be preferable.

In particular, my view is that it is reasonable to remove the publicly viewable Dislike counts from videos by default, but that creators should be provided with an option to re-enable those counts on their specific videos (or on all of their videos) if they wish to do so.

With YouTube removing the counts by default, YouTube creators who are not aware of these issues will be automatically protected. But creators who feel that showing Dislike counts is good for them could opt to display them. Win-win!

–Lauren–

Apple Backdoors Itself [ 06-Aug-21 3:35pm ]

UPDATE (September 3, 2021): Apple has now announced that “based on feedback” they are delaying the launch of this project to “collect input and make improvements” before release.

– – –

Apple’s newly revealed plan to scan users’ Apple devices for photos and messages related to child abuse is actually fairly easy to explain from a high-level technical standpoint.

Apple has abandoned their “end-to-end” encrypted messaging promises. They’re gone. Poof! Flushed down the john. Because a communication system that supposedly is end-to-end encrypted — but has a backdoor built into user devices — is like being sold a beautiful car and discovering after the fact that it doesn’t have any engine. It’s fraudulent.

The depth of Apple’s betrayal of its users is not specifically in the context of dealing with child abuse — which we all agree is a very important issue indeed — but that by building any kind of backdoor mechanism into their devices they’ve opened the legal door to courts and other government entities around the world to make ever broader demands for secret, remote access to the data on your Apple phones and other devices. And even if you trust your government today with such power — imagine what a future government in whom you have less faith may do.

In essence, Apple has given away the game. It’s as if you went into a hospital to have your appendix removed, and when you awoke you learned that they also removed one of your kidneys and an eye. Surprise!

There is no general requirement that Apple (or other firms) provide end-to-end crypto in their products. But Apple has routinely proclaimed itself to be a bastion of users’ privacy, while simultaneously being highly critical of various other major firms’ privacy practices. 

That’s all just history now, a popped balloon. Apple hasn’t only jumped the shark, they’ve fallen into the water and are sinking like a stone to the bottom.

–Lauren–

As the COVID “Delta” variant continues its spread around the globe, the Biden administration has deployed something of a basketball-style full-court press against misinformation on social media sites. That its intentions are laudable is evident and not at issue. Misinformation on social media and in other venues (such as various cable “news” channels) definitely plays a major role in vaccine hesitancy — though it appears that political and peer allegiances play a significant role as well, even for persons who have accurate information about the available vaccines.

Yet good intentions by the administration do not necessarily always translate into optimum statements and actions, especially in an ecosystem as large and complex as social media. When President Biden recently asserted that Facebook is “killing people” (a statement that he later walked back) it raised many eyebrows both in the U.S. and internationally.

I implied above that the extent to which vaccine misinformation (as opposed to or in combination with other factors) is directly related to COVID infections and/or deaths is not a straightforward metric. But we can still certainly assert that Facebook has traditionally been an enormous — likely the largest — source of misinformation on social media. And it is also true, as Facebook strongly retorted in the wake of Biden’s original remark, that Facebook has been working to reduce COVID misinformation and increase the viewing of accurate disease and vaccine information on their platform. Other firms such as Twitter and Google have also been putting enormous resources toward misinformation control (and its subset of “disinformation” — which is misinformation being purposely disseminated with the knowledge that it is false).

But for those both inside and outside government who assert that these firms “aren’t doing enough” to control misinformation, there are technical realities that need to be fully understood. And key among these is this: There is no practical way to eliminate all misinformation from these platforms. It is fundamentally impossible without preventing ordinary users from posting content at all — at which point these platforms wouldn’t be social media any longer.

Even if it were possible for a human moderator (or humans in concert with automated scanning) to pre-moderate every single user posting before permitting them to be seen and/or shared publicly, differences in interpretation (“Is this statement in this post really misinformation?”), errors, and other factors would mean that some misinformation is bound to spread — and that can happen very quickly and in ways that would not necessarily be easily detected either by human moderators or by automated content scanning systems. But this is academic. Without drastically curtailing the amount of User Generated Content (UGC) being submitted to these platforms, such pre-moderation models are impractical.

Some other statements from the administration also triggered concerns. The administration appeared to suggest that the same misinformation standards should be applied by all social media firms — a concept that would obviously eliminate the ability of the Trust & Safety teams at these firms to make independent decisions on these matters. And while the administration denied that it was dictating to firms what content should be removed as misinformation, they did say that they were in frequent contact with firms about perceived misinformation. Exactly what that means is uncertain. The administration also said that a short list of “influencers” were responsible for most misinformation on social media — though it wasn’t really apparent what the administration would want firms to do with that list. Disable all associated accounts? Watch those accounts more closely for disinformation? I certainly don’t know what was meant.

But the fundamental nature of the dilemma is even more basic. For governments to become involved at all in social media firms’ decisions about misinformation is a classic slippery slope, for multiple reasons.

Even if government entities are only providing social media firms with “suggestions” or “pointers” to what they believe to be misinformation, the outsized influence that these could have on firms’ decisions cannot be overstated, especially when some of these same governments have been threatening these same firms with antitrust and other actions.

Perhaps of even more concern, government involvement in misinformation content decisions could potentially undermine the currently very strong argument that these firms are not subject to First Amendment considerations, and so are able to make their own decisions about what content they will permit on their platforms. Loss of this crucial protection would be a big win for those politicians and groups who wish to prevent social media firms from removing hate speech and misinformation from their platforms. So ironically, government involvement in suggesting that particular content is misinformation could end up making it even more difficult for these firms to remove misinformation at all!

Even if you feel that the COVID crisis is reason enough to endorse government involvement in social media content takedowns, please consider for a moment the next steps. Today we’re talking about COVID misinformation. What sort of misinformation — there’s a lot out there! — will we be talking about tomorrow? Do we want the government urging content removal about various other kinds of misinformation? How do we even define misinformation in widely different subject areas?

And even if you agree with the current administration’s views on misinformation, how do you know that you will agree with the next administration’s views on these topics? If you want the current administration to have these powers, will you be agreeable to potentially a very different kind of administration having such powers in the future? The previous administration and the current one have vastly diverging views on a multitude of issues. We have every reason to expect at least some future administrations to follow this pattern.

The bottom line is clear. Even with the best of motives, governments should not be involved in content decisions involving misinformation on social media. Period.

–Lauren–

Ransomware is currently a huge topic in the news. A crucial gasoline pipeline shuts down. A major meat processor is sidelined. It almost feels as if there are new announced ransomware attacks every few days, and there are certainly many such attacks that are never made public.

We see commentators claiming that ransomware attacks are the software equivalent of 9/11, and that perpetrators should be treated as terrorists. Over on one popular right-wing news channel, a commentator gave a literal “thumbs up” to the idea that ransomware perpetrators might be assassinated.

The Biden administration and others are suggesting that if Russia’s Putin isn’t responsible for these attacks, he at least must be giving his tacit approval to the ones apparently originating there. For his part, Putin is laughing off such ideas.

There clearly is political hay to be made from linking ransomware attacks to state actors, but it is certainly true that ransomware attacks can potentially have much the same devastating impacts on crucial infrastructure and operations as more “traditional” cyberattacks.

And while it is definitely possible for a destruction-oriented cyberattack to masquerade as a ransomware attack, it is also true that the vast majority of ransomware attacks appear to be aimed not at actually causing damage, but for the rather more prosaic purpose of extorting money from the targeted firms.

All this having been said, there is actually a much more alarming bottom line. The vast majority of these ransomware attacks are not terribly sophisticated in execution. They don’t need to depend on armies of top-tier black-hat hackers. They usually leverage well-known authentication weaknesses, such as corporate networks accessible without robust 2-factor authentication techniques, and/or firms’ reliance on outmoded firewall/VPN security models.

Too often, we see that a single compromised password gives attackers essentially unlimited access behind corporate firewalls, with predictably dire results.

The irony is that the means to avoid these kinds of attacks are already available — but too many firms just don’t want to make the effort to deploy them. In effect, their systems are left largely exposed — and then there’s professed surprise when the crooks simply saunter in! There are hobbyist forums on the Net that, having already implemented these security improvements, are now actually better protected than many major corporations!

I’ve discussed the specifics many times in the past. The use of 2-factor (aka 2-step) authentication can make compromised username/password combinations far less useful to attackers. When FIDO/U2F security keys are properly deployed to provide this authentication, successful fraudulent logins tend rapidly toward nil.
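To illustrate why a stolen username/password pair alone fails against 2-factor authentication, here is a minimal sketch of time-based one-time password (TOTP) verification per RFC 6238 — one common 2-step method. (Note that FIDO/U2F security keys use a different, challenge-response mechanism; this is only an illustrative example, not the implementation any particular service uses.)

```python
import hmac
import hashlib
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password."""
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, submitted, window=1):
    """Accept codes from adjacent time steps to tolerate clock skew."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))
```

The point of the sketch: without the shared secret held on the user’s device, an attacker armed only with a phished password cannot produce a valid, constantly changing code.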

Combine these security key models with “zero trust” authentication, such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), and security is even further enhanced, since an attacker who simply penetrates a firewall or compromises a VPN can no longer find themselves with largely unfettered access to targeted internal corporate resources.
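BeyondCorp’s real machinery is far more elaborate, but the core zero-trust idea — evaluating user identity and device state on every single request, with network location conferring no privilege at all — can be sketched conceptually as follows (all names and the policy table here are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool    # did the user complete security-key authentication?
    device_trusted: bool  # is the device inventoried, patched, and compliant?
    resource: str

# Hypothetical per-resource access policy. Note what is absent: there is no
# check for "is the request coming from inside the corporate network" —
# being behind the firewall buys an attacker nothing.
ACL = {"payroll-db": {"alice"}, "wiki": {"alice", "bob"}}

def authorize(req: Request) -> bool:
    """Every request is judged on user identity + device state + policy."""
    return (req.mfa_verified
            and req.device_trusted
            and req.user in ACL.get(req.resource, set()))
```

Under this model, a single compromised password — or a breached VPN endpoint — no longer translates into the unlimited lateral access described above.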

These kinds of security tools are available immediately. There is no need to wait for government actions or admissions from Putin! And sooner rather than later, firms and institutions that continue to stall on deploying these kinds of security methodologies will likely find themselves answering ever more pointed questions from their stockholders or other stakeholders, demanding to know why these security improvements weren’t already made *before* these organizations were targeted by new highly publicized ransomware attacks!

–Lauren–

While we’re all still reeling from the recent horrific, tragic, and utterly preventable incidents of mass shooting murders, inside the D.C. beltway today events are taking place that could put innumerable medically challenged Americans at deep risk — and the culprit is Louis DeJoy, the Postal Service (USPS) Postmaster General and Trump megadonor.

His 10-year plan for destroying the USPS — by treating it like his former for-profit shipping logistics business rather than the SERVICE it was intended to be — was released today, along with a flurry of self-congratulatory official USPS tweets that immediately attracted massive negative replies, most of them demanding that DeJoy be removed from his position. Now. Right now!

I strongly concur with this sentiment.

Even as first class and other mail delays have already been terrifying postal customers dependent on the USPS for critical prescription medications and other crucial products, DeJoy’s plan envisions even longer mail delays — including additional days of delay for delivery of local first class mail, banning first class mail from air shipping, raising rates, cutting back on post office hours, and — well, you get the idea.

Fundamentally the plan is simple. Destroy the USPS via the “death by a thousand cuts” — leaving to slowly twist in the wind those businesses and individuals without the wherewithal to rely on much more expensive commercial carriers.

While President Biden has taken some initial steps regarding the USPS by nominating several new members to the USPS board of governors (who must be confirmed by the Senate) — a process that could ultimately enable the ousting of DeJoy (since only the board can fire him directly) — we do not have the time for this process to play out.

Biden has apparently been reluctant to take the “nuclear option” of firing DeJoy’s supporters on the board — they can be fired “for cause” — but many observers assert that their complicity in this DeJoy plan to wreck USPS services would be cause enough.

One thing is for sure. The kinds of changes that DeJoy is pushing through would be expensive and time consuming to unwind later on. And in the meantime, everybody — businesses and ordinary people alike — will suffer greatly at DeJoy’s hands. 

President Biden should act immediately to take any and all legal steps to get DeJoy out of the USPS before DeJoy can do even more damage to us all.

–Lauren–

As it stands right now, major news organizations — in league with compliant politicians around the world — seem poised to use the power of their national governments to take actions that could absolutely destroy the essentially open Web, as we’ve known it since Sir Tim Berners-Lee created the first operational web server and client browser at CERN in 1990.

Australia — home of the right-wing Rupert Murdoch empire — is in the lead of pushing this nightmarish travesty, but other countries around the world are lining up to join in swinging wrecking balls at Web users worldwide. 

Large Internet firms like Facebook and Google, feeling pressure to protect their income streams more than to protect their users, are taking varying approaches toward this situation, but the end result will likely be the same in any case — users get the shaft.

The underlying problem is that news organizations are now demanding to be paid by firms like Google and Facebook merely for being linked from them. The implications of this should be obvious — it creates the slippery slope where more and more sites of all sorts around the world would demand to be paid for links, with the result that the largest, richest Internet firms would likely be the last ones standing, and competition (along with choices available to users) would wither away. 

The current situation is still in considerable flux — seemingly changing almost hour by hour — but the trend lines are clear. Google had originally taken a strong stance against this model, rightly pointing out how it could wreck the entire concept of open linking across the Web, the Web’s very foundation! But at the last minute, it seems that Google lost its backbone, and has been announcing payoff deals to Murdoch and others, which of course will just encourage more such demands. At the moment Facebook has taken the opposite approach, and has literally cut off news from their Australian users. The negative collateral effects that this move has created make it unlikely that this can be a long-term action.

But what we’re really seeing from Facebook and Google (and other large Internet firms who are likely to be joining their ranks in this respect) — despite their differing approaches at the moment — is essentially their floundering around in a kind of desperation. They don’t really want (and/or don’t know how) to address the vast damage that will be done to the overall Web by their actions, beyond their own individual ecosystems. From a profit center standpoint this arguably makes sense, but from the standpoint of ordinary users worldwide it does not.

To use the vernacular, users are being royally screwed, and that screwing has only just begun.

Some observers of how the news organizations and their government sycophants are pushing their demands have called these actions blackmail. There is one universal rule when dealing with blackmailers — no matter how much you pay them, they’ll always come back demanding more. In the case of the news link wars, the end result, if the current path is continued, will be demands encompassing the entire Web — users be damned.

–Lauren–

Claims of “cancel culture” seem to be everywhere these days. Almost every day, we seem to hear somebody complaining that they have been “canceled” from social media, and pretty much inevitably there is an accompanying claim of politically biased motives for the action.

The term “cancel culture” itself appears to have been pretty much unknown until several years ago, and seems to have morphed from the term “call-out culture” — which ironically is generally concerned with someone getting more publicity than they desire, rather than less.

Be that as it may, cancel culture complaints — the lion’s share of which emanate from the political right wing — are now routinely used to lambaste social media and other Internet firms, to assert that their actions are based on political statements with which the firms do not agree and (according to these accusations) seek to suppress.

However, even a casual inspection of these claims suggests that the actual issues in play are hate speech, violent speech, and dangerous misinformation and disinformation — not political viewpoints — and formal studies reinforce this observation, e.g. “False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives.”

Putting aside for now the fact that the First Amendment applies only to government actions against speech, even a cursory examination of the data — confirmed by more rigorous analysis — reveals not only that right-wing entities are overwhelmingly the source of most associated dangerous speech (though they are by no means the only source; there are sources on the left as well), but also that conservatives overall still have prominent visibility on social media platforms, dramatically calling into question the claims of “free speech” violations.

Inextricably intertwined with this are various loud, misguided, and dangerous demands for changes to (and in some cases total repeal of) Communications Decency Act Section 230, the key legislation that makes all forms of Internet UGC — User Generated Content — practical in the first place.

And here we see pretty much equally unsound proposals (largely completely conflicting with each other) from both sides of the political spectrum, often apparently based on political motives and/or a dramatic ignorance of the negative collateral damage that would be done to ordinary users if such proposals were enacted.

The draconian penalties associated with various of these proposals — aimed at Internet firms — would almost inevitably lead not to the actually desired goals of the right or left, but rather to the crushing of ordinary Internet users, by vastly reducing (or even eliminating entirely) the amount of their content on these platforms — that is, videos they create, comments, discussion forums, and everything else users want to share with others.

The practical effect of these proposals would be not to create more free speech or simply reduce hate and violent speech, misinformation and disinformation, but to make it impractical for Internet platforms to support user content — which is vast in scale beyond the imagination of most persons — in anything like the ways it is supported today. The risks would just be too enormous, and methodologies to meet the new demanded standards — even if we assume the future deployment of advanced AI systems and vast new armies of proactive moderators — do not exist and likely could never exist in a practical and affordable manner.

This is truly one of those “be careful what you wish for” moments, like asking the newly-released genie to “fix social media” and with a wave of his hand he eliminates the ability of anyone in the public — prominent or not, on the right or the left — to share their views or other content.

So as we see, complaints about social media are being driven largely by highly political arguments, but in reality they involve enormously complex technical challenges at gigantic scales — many of which we don’t even fundamentally understand, given the toxic political culture of today.

As much as nobody would likely argue that Section 230 is perfect, I have yet to see any realistic proposals to change it that would not make matters far worse — especially for ordinary users who largely don’t understand how much they have to lose in these battles. 

Democracy has famously been called “the worst possible system of governance, except for all the others.” Much the same can be said of Section 230: buying into the big lie of cancel culture and the demands to alter Section 230 is wrong for the Internet and would be terrible for its users.

–Lauren–

I increasingly suspect that the days of large-scale public distribution of unmoderated UGC (User Generated Content) on the Internet may shortly begin drawing to a close in significant ways. The most likely path leading to this over time will be a combination of steps taken independently by social media firms and future legislative mandates.

Such moderation at scale may follow the model of AI-based first-level filtering, followed by layers of human moderators. It seems unlikely that today’s scale of postings could continue under such a moderation model, but future technological developments may well turn out to be highly capable in this realm.

Back in 1985 when I launched my “Stargate” experiment to broadcast Usenet Netnews over the broadcast television vertical blanking interval of national “Superstation WTBS,” I decided that the project would only carry moderated Usenet newsgroups. Even more than 35 years ago, I was concerned about some of the behavior and content already beginning to become common on Usenet. My main related concerns back then did not involve hate speech or violent speech — which were not significant problems on the Net at that point — but human nature being what it is I felt that the situation was likely to get much worse rather than better.

What I had largely forgotten in the decades since then though, until I did a Google search on the topic today (a great deal of original or later information on Stargate is still online, including various of my relevant messages in very early mailing list archives that will likely long outlive me), is the level of animosity about that decision that I received at the time. My determination for Stargate to only carry moderated groups triggered cries of “censorship,” but I did not feel that responsible moderation equated with censorship — and that is still my view today.

And now, all these many years later, it’s clear that we’ve made no real progress in these regards. In fact, the associated issues of abuse of unmoderated content in hateful and dangerous ways make the content problems that most concerned me back then seem like a soap bubble popping, compared with a nuclear bomb detonating now.

We must solve this. We must begin serious and coordinated work in this vein immediately. And my extremely strong preference is that we deal with these issues together as firms, organizations, customers, and users — rather than depend on government actions that, if history is any guide, will likely do enormous negative collateral damage.

Time is of the essence.

–Lauren–

The post below was originally published on 10 August 2019. In light of recent events, particularly the storming of the United States Capitol by a violent mob — resulting in five deaths — and subsequent actions by major social media firms relating to the exiting President Donald Trump (terms of service enforcement actions by these firms that I do endorse under these extraordinary circumstances), I feel that the original post is again especially relevant. While the threats of moves by the Trump administration against CDA Section 230 are now moot, it is clear that 230 will be a central focus of Congress going forward, and it’s crucial that we all understand the risks of tampering with this key legislation that is foundational to the availability of responsible speech and content on the Internet. –Lauren–

– – – – – – – – –  –

The Right’s (and Left’s) Insane Internet Content Power Grab
(10 August 2019)

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it's impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they're the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the "political bias" arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in "inappropriate censorship" — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn't faze the politicians and others making these demands, who apparently either don't understand the enormous scale on which these firms operate, or simply don't care about such truths when they get in the way of politicians' political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what's actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it's not actually about protecting users, it's mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump's will trigger an orgy of court battles. For Trump himself, this probably doesn't matter too much — he likely doesn't really care how these battles turn out, so long as he's managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

-Lauren-

27-Apr-20


Everyone, I hope you and yours are safe and well during this unprecedented pandemic.

As I write this, various governments are rushing to implement — or have already implemented — a wide range of different smartphone apps purporting to be for public health COVID-19 “contact tracing” purposes. 

The landscape of these is changing literally hour by hour, but I want to emphasize MOST STRONGLY that these apps are not all created equal, and I urge you not to install various of these unless you are required to by law — which can indeed be the case in countries such as China and Poland, to name just two examples.

Without getting into deep technical details here, there are basically two kinds of these contact tracing apps. The first is apps that send your location or other contact-related data to centralized servers (whether the data being sent is claimed to be “anonymous” or not). Regardless of promised data security and professed limitations on government access to and use of such data, I do not recommend voluntarily choosing to install and/or use these apps under any circumstances.

The other category of contact tracing apps uses local phone storage and never sends your data to centralized servers. This is by far the safer category, and it is where the recently announced Apple-Google Bluetooth contact tracing API resides. That API is being adopted in some countries — including now in Germany, which just announced that due to privacy concerns it has changed course from its original plan of using centralized servers.

In general, installing and using local storage contact tracing apps presents a vastly less problematic and risky situation compared with centralized server apps.
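The decentralized model’s privacy advantage is easiest to see in code. What follows is emphatically not the actual Apple-Google protocol (which uses a cryptographic key schedule broadcast over Bluetooth); it is only a conceptual sketch, with invented names, of the local-storage approach: phones broadcast short-lived random identifiers, remember locally what they’ve heard nearby, and later compare those against published identifiers of confirmed cases — so no central server ever sees anyone’s contact graph.

```python
import secrets

class Phone:
    def __init__(self):
        self.my_ids = []    # identifiers this phone has broadcast (kept on-device)
        self.heard = set()  # identifiers heard from nearby phones (kept on-device)

    def rotate_id(self):
        """Broadcast a fresh random identifier, unlinkable to the owner."""
        eph = secrets.token_bytes(16)
        self.my_ids.append(eph)
        return eph

    def observe(self, eph):
        """Record an identifier heard over Bluetooth from a nearby phone."""
        self.heard.add(eph)

def exposure_check(phone, published_case_ids):
    """Matching happens entirely on-device; the health authority's server
    only publishes the identifiers of consenting confirmed cases."""
    return bool(phone.heard & published_case_ids)
```

Contrast this with the centralized-server model, where every observed identifier (or location) is uploaded, handing the operator a map of who met whom.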

Even if you personally have 100% faith that your own government will “do no wrong” with centralized server contact tracing apps — either now or in the future under different leadership — keep in mind that many other persons in your country may not be as naive as you are, and will likely refuse to install and/or use centralized server contact tracing apps unless forced to do so by authorities.

Very large-scale acceptance and use of any contact tracing apps are necessary for them to be effective for genuine pandemic-related public health purposes. If enough people won’t use them, they are essentially worthless for their purported purposes.

As I have previously noted, various governments around the world are salivating at the prospect of making mass surveillance via smartphones part of the so-called “new normal” — with genuine public health considerations as secondary goals at best.

We must all work together to bring the COVID-19 disaster to an end. But we must not permit this tragic situation to hand carte blanche permissions to governments to create and sustain ongoing privacy nightmares in the process. 

Stay well, all.

–Lauren–

18-Mar-20


As vast numbers of people are suddenly working from home in reaction to the coronavirus pandemic, doctors switch to heavy use of video office visits, and in general more critical information than ever is suddenly being thrust onto the Internet, the risks of major security and privacy disasters that will long outlast the pandemic are rising rapidly. 

For example, the U.S. federal government is suspending key aspects of medical privacy laws to permit use of “telemedicine” via commercial services that have never been certified to be in compliance with the strict security and privacy rules associated with HIPAA (Health Insurance Portability and Accountability Act). The rush to provide more remote access to medical professionals is understandable, but we must also understand the risks of data breaches that once having occurred can never be reversed.

Sloppy computer security practices that have long been warned against are now coming home to roost, and the crooks as usual are way ahead of the game.  

The range of attack vectors is both broad and deep. Many firms have never prepared for large-scale work at home situations, and employees using their own PCs, laptops, phones, or other devices to access corporate networks can represent a major risk to company and customer data. 

Fake web sites purporting to provide coronavirus information and/or related products are popping up in large numbers around the Net, all with nefarious intents to spread malware, steal your accounts, or rob you in other ways.

Even when VPNs (Virtual Private Networks) are in use, malware on employee personal computers may happily transit VPNs into corporate networks. Commercial VPN services introduce their own risk factors, both due to potential flaws in their implementations and the basic technical limitations inherent in using a third-party service for such purposes. Corporate users should avoid third-party VPN services whenever possible; firms and other organizations that rely on VPNs should deploy “in-house” VPN systems, provided they truly have the technical expertise to do so safely.

But far better than VPNs are “zero trust” security models such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), that can provide drastically better security without the disadvantages and risks of VPNs.

There are even more basic issues in focus. Most users still refuse to enable 2-factor (aka “2-step”) verification systems (https://www.google.com/landing/2step/) on services that support it, putting them at continuous risk of successful phishing attacks that can result in account hijacking and worse. 

I’ve been writing about all of this for many years here in this blog and in other venues. I’m not going to make a list here of my many relevant posts over time — they’re easy enough to find. 

The bottom line is that the kind of complacency that has been the hallmark of most firms and most users when it comes to computer security is even less acceptable now than ever before. It’s time to grow up, bite the bullet, and expend the effort — which in some cases isn’t a great deal of work at all! — to secure your systems, your data, and yes, your life and the lives of those that you care about.

Stay well.

–Lauren–

08-Feb-20


For years — actually for decades — those of us in the Computer Science community who study election systems have with almost total unanimity warned against the rise of electronic voting, Internet voting, and more recently smartphone/app-based voting systems. I and my colleagues have written and spoken on this topic many times. Has anyone really been listening? Apparently very few!

We have pointed out repeatedly the fundamental problems that render high-tech election systems untrustworthy — much as “backdoors” to strong encryption systems are flawed at foundational levels.

Without a rigorous “paper trail” to back up electronic votes, knowing for sure when an election has been hacked is technically impossible. Even with a paper trail, getting authorities to use it can be enormously challenging. Hacking contests against proposed e-voting systems are generally of little value, since the most dangerous attackers won’t participate in those — they’ll wait for the real elections to do their undetectable damage!

Of course it doesn’t help when the underlying voting models are just this side of insane. Iowa’s caucuses have become a confused mess on every level. Caucuses throughout the U.S. should have been abandoned years ago. They disenfranchise large segments of the voting population who don’t have the ability to spend so much time engaged in a process that can take hours rather than a few minutes to cast their votes. Not only should the Democratic party have eliminated caucuses, it should no longer permit tiny states whose demographics are wholly unrepresentative of the party — and of the country as a whole — to be so early in the primary process. 

In the case of Iowa (and it would have been Nevada too, but they’ve reportedly abandoned plans to use the same flawed app) individual voters weren’t using their smartphones to vote, but caucus locations — almost 1700 of them in Iowa — were supposed to use the app (that melted down) to report their results. And of course the voice phone call system that was designated to be the reporting backup — the way these reports had traditionally been made — collapsed under the strain when the app-based system failed.

Some areas in the U.S. are already experimenting with letting larger and larger numbers of individual voters use their smartphones and apps to vote. It seems so obvious. So simple. They just can’t resist. And they’re driving their elections at 100 miles an hour right toward a massive brick wall.

Imagine — just imagine! — what the reactions would be during a national election if problems like Iowa’s occurred then on a much larger scale, especially given today’s toxic conspiracy theories environment. 

It would be a nuclear dumpster fire of unimaginable proportions. The election results would be tied up in courts for days, weeks, months — who knows?

We can’t take that kind of risk. Or if we do, we’re idiots and deserve the disaster that is likely to result.

Make your choice.

–Lauren–

17-Jan-20


One of the most poignant ironies of the Internet is that at the very time that it’s become increasingly difficult for anyone to conduct their day to day lives without using the Net, some categories of people are increasingly being treated badly by many software designers. The victims of these attitudes include various special needs groups — visually and/or motor impaired are just two examples — but the elderly are a particular target.

Working routinely with extremely elderly persons who are very active Internet users (including in their upper 90s!), I’m particularly sensitive to the difficulties that they face keeping their Net lifelines going. 

Often they’re working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official “end of life” for updates a few days ago.

Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other routine purposes for which all of us now routinely depend on these technologies.

This is a difficult state of affairs, to say the least.

But there’s an aspect of this that is even worse. It’s attitudes! It’s the attitudes of many software designers that suggest they apparently really don’t care about this class of users much — or at all.

They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function. 

We can certainly stipulate that using old browsers and old operating systems is dangerous. In a perfect world, resources would be available to get everyone out of this situation.

But of course we don’t exist in a perfect world, and these users, who are already often so disadvantaged in so many other ways, need support from software designers, not disdain or benign neglect.

A current example of these users being left behind is the otherwise excellent, open source “Discourse” forum software. I use this software myself, and it’s a wonderful project.

Recently they announced that they would be pulling all support for Internet Explorer (except for limited read-only access) from the Discourse software. Certainly they are not the only site or project dropping support for old browsers, but this fact does not eliminate the dilemma.

I despise Internet Explorer. And yes, old computers running old OS versions and old browsers represent security risks to their users. Definitely. No question about it. But what of the users who don’t understand how to upgrade? Who don’t have anyone to help them upgrade? Are we to tell them that they matter not at all? Is the plan to try to ignore them as much as possible until they’re all dead and gone? Newsflash: This category of users will always exist!

This issue rose to the top of my morning queue today when I saw a tweet from Jeff Atwood (@codinghorror). Jeff is the force behind the creation and evolution of Discourse, and was a co-founder of Stack Exchange. He does seriously good work.

Yet this morning we engaged in the following tweet thread:

Jeff: At this point I am literally counting the days until we can fully remove IE11 support in @discourse (June 1st 2020)

Lauren: I remain concerned about the impact this will have on already marginalized users on old systems without the skills or help to switch to other browsers. They have enough problems already!

Jeff: Their systems are so old they become extremely vulnerable to hackers and exploits, which is bad for their health and the public health of everyone else near them. It becomes an anti-vaccination argument, in which nobody wins.

Lauren: Do you regularly work with extremely elderly people whose only lifelines are their old computers? Serious question.

Somewhere around this point, he closed down the dialogue by blocking me on Twitter.

This was of course his choice, but it seems a bit sad, given that I had actually had more fruitful discussions of this matter previously on the main Discourse discussion forum itself.

Of course his anti-vaxx comparison is inherently flawed. There are virtually always ways for people who can’t afford important vaccinations to receive them. Not so for upgrading computer hardware or software, or for getting help with those systems, particularly for elderly persons living in isolation.

Yes, the world will keep spinning after Discourse drops IE support.

Far more important than this particular case, though, is the attitude being expressed by so many in the software community: an attitude suggesting that many highly capable software engineers don’t really appreciate these users, or the real-world problems that can prevent them from making even relatively simple changes or upgrades to systems they need to keep using as much as anyone else does.

And that’s an unnecessary tragedy.

–Lauren–

10-Aug-19

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to any administration of any party. They are anathema to the very principles that make the Internet great, and they must not be permitted to take root under any circumstances.

–Lauren–

30-Jul-19

Another day, another massive data breach. This time some 100 million people in the U.S. were affected, plus millions more in Canada. Reportedly the criminal hacker gained access to data stored on Amazon’s AWS systems. The fault was apparently not with AWS, but with a misconfigured firewall associated with an app from Capital One, the bank whose customers were the victims of this attack.

Firewalls can be notoriously and fiendishly difficult to configure correctly, and often present a target-rich environment for successful attacks. The thing is, firewall vulnerabilities are not headline news — they’re an old story, and better solutions to providing network security already exist.

In particular, Google’s “BeyondCorp” approach (https://cloud.google.com/beyondcorp) is something that every enterprise involved in computing should make itself familiar with. Right now!

BeyondCorp techniques are how Google protects its own internal networks and systems from attack, with enormous success. In a nutshell, BeyondCorp is a set of practices that effectively puts “zero trust” in the networks themselves, moving access control and other authentication elements to individual devices and users. This eliminates the need for traditional firewalls (and in most instances, VPNs) because there is no longer a conventional firewall which, once breached, gives an attacker access to all the goodies.
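As a rough illustration of the idea (my own sketch, not Google’s actual implementation — the names and inventories here are hypothetical), a zero-trust access check evaluates the user and device presented with each individual request, rather than trusting the network a request arrives from:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_id: str
    resource: str

# Hypothetical inventories; a real BeyondCorp-style deployment would back
# these with a device inventory service and an identity provider.
TRUSTED_DEVICES = {"laptop-042"}
USER_ACCESS = {"alice": {"payroll", "wiki"}, "bob": {"wiki"}}

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request on user identity and device state alone;
    the network location of the request confers no trust whatsoever."""
    device_ok = req.device_id in TRUSTED_DEVICES
    user_ok = req.resource in USER_ACCESS.get(req.user, set())
    return device_ok and user_ok
```

Note that there is no notion of “inside the perimeter” here: an attacker who reaches the network gains nothing without a valid user identity and an enrolled device.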

If Capital One had been following BeyondCorp principles, 100+ million of their customers might not be in a panic today.

–Lauren–

06-Jul-19
Earthquakes vs. Darth Vader [ 06-Jul-19 4:25pm ]

When the Ridgecrest earthquake reached L.A. yesterday evening (no damage this far from the epicenter from that quake or the one the previous day), I was “in” a moving elevator under attack in the “Vader Immortal” Oculus Quest VR simulation. I didn’t realize that there was a quake at all; everything seemed part of the VR experience (haptic feedback in the hand controllers was already buzzing my arms at the time).

The only oddity was that I heard a strange clinking sound that at the time had no obvious source, but that I figured was somehow part of the simulation. Actually, it was probably the sound of ceiling fan knob chains above me hitting the glass light bulb fixtures as the fan was presumably swaying a bit.

Quakes of this sort are actually very easy to miss if you’re not sitting or standing quietly (I barely felt the one the previous day and wasn’t immediately sure that it was a quake), but I did find my experience last night to be rather amusing in retrospect.

By the way, “Vader Immortal” — and the Quest itself — are very, very cool, very much 21st century “sci-fi” tech finally realized. My thanks to Oculus for sending me a Quest for my experiments.

–Lauren–

03-Jun-19
YouTube's Public Videos Dilemma [ 03-Jun-19 11:11pm ]

So there’s yet another controversy surrounding YouTube and videos that include young children — this time concerns about YouTube suggesting such videos to “presumed” pedophiles.

We can argue about what YouTube should or should not be recommending to any given user. There are some calls for YT to not recommend such videos when it detects them (an imperfect process) — though I’m not convinced that this would really make much difference so long as the videos themselves are public.

But here’s a more fundamental question:

Why the hell are parents uploading videos of young children publicly to YouTube in the first place?

This is of course a subset of a more general issue — parents who apparently can’t resist posting all manner of photos and other personal information about their children in public online forums, much of which is going to be at the very least intensely embarrassing to those children when they’re older. And the Internet rarely ever forgets anything that was ever public (the protestations of EU politicians and regulators notwithstanding).

There are really only two major possibilities concerning such video uploads. Either the parents don’t care about these issues, or they don’t understand them. Or perhaps both.

Various display apps and web pages exist that will automatically display YT videos that have few or no current views from around the world. There’s an endless stream of these. Thousands. Millions? Typically these seem as if they have been automatically uploaded by various camera and video apps, possibly without any specific intentions for the uploading to occur. Many of these involve schools and children.

So a possible answer to my question above may be that many YT users — including parents of young children — are either not fully aware of what they are uploading, or do not realize that the uploads are public and are subject to being suggested to strangers or found by searching. 

This leads us to another question. YT channel owners already have the ability to set their channel default privacy settings and the privacy settings for each individual video. 

Currently those YT defaults are initially set to public.

Should YT’s defaults be private rather than public?

Looking at it from a user trust and safety standpoint, we may be approaching such a necessity, especially given the pressure for increased regulatory oversight from politicians and governments, which in my opinion is best avoided if at all possible.

These questions and their ramifications are complex to say the least.

Clearly, default channel and video privacy would be the safest approach, ensuring that videos would typically only be shared with specific other users deemed suitable by the channel owner.

All of the public sharing capabilities of YT would still be present, but would require the owner to make specific decisions about the channel default and/or individual video settings. If a channel owner wanted to make some or all of their videos public — whether videos already uploaded, future uploads, or both — that would be their choice. Full channel and individual video privacy would only be the original default, purely as a safety measure.

Finer-grained settings might also be possible, not only including existing options like “unlisted” videos, but also specific options to control the visibility of videos and channels in search and suggestions.
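To make the proposal concrete, here is a hypothetical sketch (the names are mine, not YouTube’s actual API) of how private-by-default resolution might work: a video keeps any setting the owner explicitly chose, and otherwise inherits a channel default that starts as private.

```python
from enum import Enum
from typing import Optional

class Visibility(Enum):
    PRIVATE = "private"
    UNLISTED = "unlisted"
    PUBLIC = "public"

def effective_visibility(video_setting: Optional[Visibility],
                         channel_default: Visibility = Visibility.PRIVATE) -> Visibility:
    """A video the owner explicitly configured keeps that setting;
    anything else inherits the channel default, which under this
    proposal would initially be PRIVATE rather than PUBLIC."""
    return video_setting if video_setting is not None else channel_default

def eligible_for_search_and_suggestions(v: Visibility) -> bool:
    """Only deliberately public videos would surface in search or
    suggestions; unlisted and private videos would not."""
    return v is Visibility.PUBLIC
```

Under this scheme, an upload made without any informed, explicit choice by the owner simply never becomes visible to strangers.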

Some of the complexities of such an approach are obvious. More controls means the potential for more user confusion. Fewer videos in search and suggestions limits visibility and could impact YT revenue streams to both Google and channel owners in complex ways that may be difficult to predict with significant accuracy.

But in the end, the last question here seems to be a relatively simple one. Should any YouTube uploaders ever have their videos publicly available for viewing, search, or suggestions if that was not actually their specific and informed intent?

I believe that the answer to that question is no.

Be seeing you.

–Lauren–

02-May-19

Almost exactly two years ago, I noted here the comprehensive features that Google provides for users to access their Google-related activity data, and to control and/or delete it in a variety of ways. Please see:

The Google Page That Google Haters Don't Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

and:

Quick Tutorial: Deleting Your Data Using Google's “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity

Today Google announced a new feature that I’ve long been hoping for — the option to automatically delete these kinds of data after specific periods of time have elapsed (3 month and 18 month options). And of course, you still have the ability to use the longstanding manual features for control and deletion of such data whenever you desire, as described at the links mentioned above.

The new auto-delete feature will be deployed over coming weeks first to Location History and to Web & App Activity.

This is really quite excellent. It means that you can take advantage of the customization and other capabilities that are made possible by leaving data collection enabled, but if you’re concerned about longer term storage of that data, you’ll be able to activate auto-delete and really get the best of both worlds without needing to manually delete data yourself at intervals.
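Conceptually, time-based auto-delete is simple. A minimal sketch (my own illustration, not Google’s code; the day counts are approximations of the announced 3- and 18-month options) of a retention sweep:

```python
from datetime import datetime, timedelta

# The announced options, expressed as approximate retention windows in days.
RETENTION_OPTIONS = {"3 months": 90, "18 months": 548}

def sweep(records, retention_days, now=None):
    """Drop any (timestamp, payload) record older than the retention
    window; everything newer is kept, remaining available for
    personalization until it ages out on its own."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [(ts, p) for ts, p in records if ts >= cutoff]
```

Run periodically, a sweep like this gives exactly the “best of both worlds” behavior described above: recent activity stays useful, older activity disappears without manual effort.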

Auto-delete is a major privacy-positive milestone for Google, and is a model that other firms should follow. 

My kudos to the Google teams involved!

–Lauren–

29-Apr-19

Could machine learning/AI techniques help to prevent mass shootings or other kinds of terrorist attacks? That’s the question. I do not profess to know the answer — but it’s a question that as a society we must seriously consider.

A notable, relatively recent attribute of many mass attacks is that the criminal perpetrators don’t only want to kill; they want as large an audience as possible for their murderous activities, frequently planning their attacks openly on the Internet, even announcing online the initiation of their killing sprees and providing live video streams as well. Sometimes they use private forums for this purpose, but public forums seem to be even more popular in this context, given their potential for capturing larger audiences.

It’s particularly noteworthy that in some of these cases, members of the public were indeed aware of such attack planning and announcements due to those public postings, but chose not to report them. The reasons for the lack of reporting can be several. Users may be unsure whether or not the posts are serious, and don’t want to report someone for a fake attack scenario. Other users may want to report but not know where to report such a situation. And there may be other users who are actually urging the perpetrator onward to the maximum possible violence.

“Freedom of speech” and some privacy protections are generally viewed as ending where credible threats begin. Particularly in the context of public postings, this suggests that detecting these kinds of attacks before they have actually occurred may possibly be viewed as a kind of “big data” problem.

We can relatively easily list some of the factors that would need to be considered in these respects.

What level of resources would be required to keep an “automated” watch on at least the public postings and sites most likely to harbor the kinds of discussions and “attack manifestos” of concern? Could tools be developed to help separate false positive, faked, forged, or other “fantasy” attack postings from the genuine ones? How would these be tracked over time to include other sites involved in these operations, and to prevent “gaming” of the systems that might attempt to divert these tools away from genuine attack planning?

Clearly — as in many AI-related areas — automated systems alone would not be adequate by themselves to trigger full-scale alarms. These systems would primarily act as big filters, and would pass along to human teams their perceived alerts — with those teams making final determinations as to dispositions and possible referrals to law enforcement for investigatory or immediate preventative actions.
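That division of labor can be sketched as follows (purely illustrative; the scoring model is assumed, not specified, and the function names are mine):

```python
def triage(posts, score_fn, threshold=0.9):
    """Automated first pass: score every public post with an assumed
    ML model (score_fn) and forward only those above the threshold
    to a human review queue. The automation only filters."""
    return [p for p in posts if score_fn(p) >= threshold]

def human_review(queue, analyst_fn):
    """Final determinations are made by human teams, who decide
    disposition and any referral to law enforcement; the automated
    filter alone never triggers full-scale alarms."""
    return [p for p in queue if analyst_fn(p)]
```

The threshold embodies the trade-off discussed above: set too low, the human teams drown in false positives; set too high, genuine attack planning slips through.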

It can be reasonably argued that anyone publicly posting the kinds of specific planning materials that have been discovered in the wake of recent attacks has effectively surrendered various “rights” to privacy that might ordinarily be in force.

The fact that we keep discovering these kinds of directly related discussions and threats publicly online in the wake of these terrorist attacks suggests that we are not effectively using the public information that is already available toward stopping these attacks before they occur.

To the extent that AI/machine learning technologies — in concert with human analysis and decision-making — may possibly provide a means to improve this situation, we should certainly at least be exploring the practical possibilities and associated issues.

–Lauren–

02-Apr-19

Despite my very long history of enjoying “apocalyptic” and “technology run amok” sci-fi films, I’ve been forthright in my personal belief that AI and associated machine learning systems hold enormous promise for the betterment of our lives and our planet (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

Of course there are definitely ways that we could screw this up. So deep discussion from a wide variety of viewpoints is critical to “accentuate the positive — eliminate the negative” (as the old Bing Crosby song lyrics suggest).

A time-tested model for firms needing to deal with these kinds of complex situations is the appointment of external interdisciplinary advisory panels. 

Google announced its own such panel — the “Advanced Technology External Advisory Council” (ATEAC), last week. 

Controversy immediately erupted both inside and outside of Google, particularly relating to the presence of prominent right-wing think tank Heritage Foundation president Kay Cole James. Another invited member — behavioral economist and privacy researcher Alessandro Acquisti — has now pulled out from ATEAC, apparently due to James’ presence on the panel and the resulting protests.

This is all extraordinarily worrisome. 

While I abhor the sentiments of the Heritage Foundation, an AI advisory panel composed only of “yes men” in agreement with more left-wing (and, admittedly, my own) philosophies regarding social issues strikes me as vastly more dangerous.

Keeping in mind that advisory panels typically do not make policy — they only make recommendations — it is critical to have a wide range of input to these panels, including views with which we may personally strongly disagree, but that — like it or not — significant numbers of politicians and voters do enthusiastically agree with. The man sitting in the Oval Office right now is demonstrable proof that such views — however much we may despise them personally — are most definitely in the equation.

“Filter bubbles” are extraordinarily dangerous on both the right and left. One of the reasons why I so frequently speak on national talk radio — whose audiences are typically very much skewed to the right — is that I view this as an opportunity to speak truth (as I see it) regarding technology issues to listeners who are not often exposed to views like mine from the other commentators that they typically see and hear. And frequently, I afterwards receive emails saying “Thanks for explaining this like you did — I never heard it explained that way before” — making it all worthwhile as far as I’m concerned.

Not attempting to include a wide variety of viewpoints on a panel dealing with a subject as important as AI would not only give the appearance of “stacking the deck” to favor preconceived outcomes, but would in fact be doing exactly that, opening up the firms involved to attacks by haters and pandering politicians who would just love to impose draconian regulatory regimes for their own benefits. 

The presence on an advisory panel of someone with whom other members may dramatically disagree does not imply endorsement of that individual.

I want to know what people who disagree with me are thinking. I want to hear from them. There’s an old saying: “Keep your friends close and your enemies closer.” Ignoring that adage is beyond foolish.

We can certainly argue regarding the specific current appointments to ATEAC, but viewing an advisory panel like this as some sort of rubber stamp for our preexisting opinions would be nothing less than mental malpractice. 

AI is far too crucial to all of our futures for us to fall into that sort of intellectual trap.

–Lauren–

 