Lauren Weinstein's Blog
https://lauren.vortex.com/

03-Feb-26

Various AI firms have launched so-called "AI browsers" and, in particular, what are called "agentic AI" browser features. Now Google has announced massive AI upgrades to their Chrome browser, which is by far the most used Web browser on this planet. These Google Gemini AI features are rolling out over time to different classes of users, paying and not, so you may not see some of them yet, but you can feel pretty confident that eventually you will.

Frankly, I don't recommend voluntarily using ANY current generative AI products from any firm. Google is indeed trying to push their Gemini AI into everything. But right now I want to warn in particular about what Google is calling Chrome "Auto Browse". This is Google's Gemini "agentic" AI system. I'll cut right to the chase: my very strong recommendation, even more so than with other AI features, is that you do not enable Auto Browse, do not use it, do not touch it. And I have the same advice for any other agentic AI systems from other firms.

What these systems do is in various ways take over your Web browsing. The AI literally masquerades as you, using your accounts and other credentials, and clicks its way around the Web to perform actions that normally you would do yourself. The concept is that in theory you could just tell the AI to find the best deal for something on the Web or book your vacation or clean up your duplicate photos or whatever, and the AI agent would run around and do all this for you.

I'm sure you already see why this has so many experts concerned: we all know how AI systems spout misinformation and get confused, and how they often can be manipulated in nefarious ways by hidden prompts embedded in their inputs. A three-year-old has more common sense than AI, because these AI systems have NO common sense. And we've already seen stories of people devastated after these agentic AI systems deleted all their files or took other just awful actions.

Now, here's the REALLY important part. It might be assumed that if these systems make terrible mistakes on your behalf using your accounts and credentials, the AI firms would take responsibility. Well, think again. Google, for example, with their new Chrome Auto Browse, pops up a warning saying explicitly that actions taken by their AI on your behalf are YOUR responsibility. If the AI screws up, YOU get the shaft.

That's the WHOLE ballgame as far as I'm concerned, and why I don't recommend using agentic AI at all. These systems typically have settings that, again in theory, are supposed to let you control what sorts of actions they take, what files of yours they have access to, and other parameters. Google's, for example, at this point reportedly stops just short of letting the AI click the final BUY NOW button that would create a charge on your accounts. And of course they say you should monitor the AI's actions.

This is all basically hogwash. Google must know that most people have neither the background nor the time to keep track of how these AIs are configured or what they're actually doing. And if you have to monitor the AI to see if it's messing up, much of the whole ostensible purpose is lost from the get-go.

There's a lot more technical detail of course. For example, your private browsing activities may be uploaded to Google as part of all this, triggering an array of additional privacy issues.

But as far as I'm concerned, this is a very straightforward decision. Even if Google, for example, were willing to accept responsibility for Auto Browse errors that could potentially cause enormous problems for users — and AGAIN, they're refusing to accept that responsibility — I would still never want these AI agents performing actions on my behalf. I won't be using them.

If you're willing to let these hallucinating Large Language AI models loose on your phone or desktop computer and let them go merrily clicking around the Web using your accounts and credentials, that's your choice of course, but being a guinea pig for Big Tech AI isn't anywhere on my personal bucket list.

-Lauren-

29-Jan-26

Proposed legislation in Washington State would attempt to ban 3D printers and CNC machines from being used to create guns or gun parts, likely expanding later to other banned items. It would also somehow require "blocking systems" to technologically prevent these devices from being able to create such items. This concept has been proposed in other venues as well.

Ostensibly all of this is to push back against the creation of so-called untraceable "ghost guns". Over the last few years 3D printers have evolved from finicky devices requiring quite a bit of expertise into consumer products that still need considerable knowledge to use at their best, but generally are much simpler for non-experts to operate. 3D printers work with plastic. Less familiar, especially to hobbyists, is CNC equipment (that's Computer Numerical Control), which can also work with plastic but is more commonly used to fashion metal or wood.

Here's a key reality: These machines themselves don't know what they're creating, beyond some models that display the shape of the object being made. These objects can vary enormously, taking a virtually infinite number of specific forms, and could typically be used for all sorts of assemblies having nothing to do with guns. 3D printers and CNC equipment are literally robots following a long list of specific instructions — move this far in the X direction, this far in Y, this distance in Z. Extrude this much plastic. And so on.

They generally don't even need Internet connections. They can follow a long list of these precise instructions in what's called g-code (which stands for "geometric code"), even when presented on a simple microSD card. And by the way, g-code was invented in the 1950s at MIT! It's been augmented over the years, of course.
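
For a concrete sense of what such an instruction list looks like, here is a tiny illustrative Python sketch (the dimensions, speeds, and extrusion ratio are invented for illustration) that generates standard G0/G1 motion commands of the kind a printer can read straight off a memory card:

    # Illustrative only: emit g-code tracing one square outline.
    # G21/G90 set units and absolute positioning; G0 is a travel move;
    # G1 moves while extruding; E is cumulative filament, F is speed (mm/min).
    def square_perimeter_gcode(size_mm=20.0, filament_per_mm=0.05):
        corners = [(0.0, 0.0), (size_mm, 0.0), (size_mm, size_mm),
                   (0.0, size_mm), (0.0, 0.0)]
        lines = ["G21 ; millimeter units", "G90 ; absolute positioning"]
        x0, y0 = corners[0]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f} F3000 ; travel, no plastic")
        e = 0.0  # cumulative extrusion
        for x, y in corners[1:]:
            e += (abs(x - x0) + abs(y - y0)) * filament_per_mm  # edge length
            lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.4f} F1200 ; print move")
            x0, y0 = x, y
        return "\n".join(lines)

    print(square_perimeter_gcode())  # this text could be saved as a .gcode file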

What creates the g-code? In the case of 3D printers, it typically comes from software generically referred to as slicers; CNC gear uses similar software to generate its g-code. Slicers take the data from CAD — Computer Aided Design — files, often in what's called STL format, and process it into the specific lists of g-code instructions.
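
To make the slicer's role concrete, here is a minimal, purely illustrative Python sketch (assuming ASCII STL input; real slicers also handle binary STL, layer stacking, infill, supports, and much more). It intersects each triangle of a model with a single horizontal plane, recovering the outline segments that would become the g-code moves for that layer:

    # Purely illustrative: slice an ASCII STL at one Z height.
    import re

    def read_ascii_stl(path):
        # Collect "vertex x y z" lines into triangles (three vertices each).
        verts = []
        pattern = re.compile(r"\s*vertex\s+(\S+)\s+(\S+)\s+(\S+)")
        with open(path) as f:
            for line in f:
                m = pattern.match(line)
                if m:
                    verts.append(tuple(float(v) for v in m.groups()))
        return [verts[i:i + 3] for i in range(0, len(verts) - 2, 3)]

    def slice_at(triangles, z):
        # Return the 2D segments where each triangle crosses the plane Z=z.
        segments = []
        for tri in triangles:
            crossings = []
            for (x1, y1, z1), (x2, y2, z2) in ((tri[0], tri[1]),
                                               (tri[1], tri[2]),
                                               (tri[2], tri[0])):
                if (z1 - z) * (z2 - z) < 0:  # this edge straddles the plane
                    t = (z - z1) / (z2 - z1)
                    crossings.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
            if len(crossings) == 2:
                segments.append(crossings)
        # A real slicer chains these segments into closed loops, then emits
        # g-code motion commands to trace them.
        return segments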

While there are some versions of all this that are proprietary, crucially all of these various elements in this engineering pipeline can be implemented using easily available parts and open source software. So it becomes obvious why so-called "blocking" technologies would be impractical at scale against anyone with the desire to ignore them.

Guns can be created using parts from a hardware store — 3D printers or CNC machines aren't necessary. Remember, the equipment itself doesn't know if it's creating a component for a gun or a similar-looking object for a harmless school engineering assignment having nothing to do with firearms. Should screwdrivers be banned because they can be used to create weapons? Of course not.

I could go on, but frankly the concept of requiring "blocking" technology in 3D printers and CNC machines isn't even a close call in terms of technological reality. It wouldn't accomplish its stated purpose, but it could cause enormous problems in a vast array of ways, since these tools are used by factories, businesses, educators, farmers, hobbyists, and many others doing nothing related to firearms at all, who would find their work constantly hobbled by such government edicts and attempts to implement them.

The blocking concept for 3D printers and CNC equipment is somewhat akin to wishful thinking. It's not practical, and it should absolutely be rejected.

-Lauren-

An experiment in AI coding with Google Gemini. I try to be fair. When I call generative AI mostly slop, I don't do so blindly; I attempt to conduct reasonable tests in various contexts.

Yesterday I needed a couple of routines — one in Bash, the other in Python. I tried the Python one first. This required code to asynchronously access a remote site API, authenticate, send and receive various data and process what was returned, relying on a well documented Python library on GitHub written specifically to deal with that site's API.
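
Just to sketch the shape of that task (the actual site, library, and endpoints aren't named here, and this generic example uses the aiohttp package rather than the library in question), the kind of code involved looks something like this:

    # Hypothetical illustration only: async API access with token auth.
    import asyncio
    import aiohttp  # generic async HTTP client; NOT the actual library used

    API_BASE = "https://api.example.com"  # placeholder URL

    async def fetch_item_ids(token: str):
        headers = {"Authorization": f"Bearer {token}"}  # assumed auth scheme
        async with aiohttp.ClientSession(headers=headers) as session:
            async with session.get(f"{API_BASE}/items") as resp:
                resp.raise_for_status()
                data = await resp.json()
        # "Process what was returned" -- here, just pull out the IDs.
        return [item["id"] for item in data.get("items", [])]

    if __name__ == "__main__":
        print(asyncio.run(fetch_item_ids("YOUR-API-TOKEN")))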

After almost two hours, I gave up. Gemini was consistently cheerful and cooperative — almost to a creepy extent. It generated code that looked reasonable, was very well commented, and even provided helpful examples of how to configure, install, and run the code.

Unfortunately, none of it actually worked.

When I noted the problems, Gemini got oddly enthusiastic, with comments like "Wow, that's a great explanation of the problems, and a very useful error message! Let's figure out what's wrong! Here is another version with more diagnostics that accesses the library more directly!"

Sort of made me feel like I was dealing with an earnest but incompetent TA in an undergraduate CS course at UCLA long ago. Which was not something I enjoyed back then!

After a bunch of iterations, I gave up. Even starting over didn't help. Gemini never seemed to produce the same code twice, no matter how I worded the prompts. The code would use completely different models each time: sometimes embedded configuration values, sometimes external files, sometimes command-line args. And the way it tried to use the Python library in question also varied enormously. It almost seemed random. Or at least pseudorandom.

I spent half an hour writing and testing the code I needed from scratch. It worked on the second try, was about half the length of any of the code Gemini generated, and was much simpler, for whatever that's worth. By comparison, Gemini's code was bloated and definitely unnecessarily complex (as well as wrong).

I did give Gemini another chance. I also needed a simple Bash script to do some date conversions. I offered that task to Gemini since I didn't want to bother digging through the various date format parameters required. Gemini came up with something reasonable for this in about four tries. Whether it's completely bug-free I dunno for sure; I haven't dug into the code deeply since it's not a critical application. But it seems to be working for now.
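
For flavor, here's the general shape of that kind of date conversion task as a minimal sketch. It's in Python rather than Bash, and the formats shown are invented stand-ins, not the ones the actual script handled:

    # Illustrative only: parse one date format and re-emit another,
    # using strptime/strftime format directives.
    from datetime import datetime

    def convert(date_str: str,
                in_fmt: str = "%d-%b-%y %I:%M%p",   # e.g. "03-Feb-26 4:07PM"
                out_fmt: str = "%Y-%m-%d %H:%M") -> str:
        return datetime.strptime(date_str, in_fmt).strftime(out_fmt)

    print(convert("03-Feb-26 4:07PM"))  # -> 2026-02-03 16:07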

So really, I haven't seen a significant improvement in this area. There are probably some reasonable sets of problems where AI coding can reduce some of the grunt work, but once you get into anything more complex, the opportunities for errors, especially in larger chunks of code where detecting those errors might not be straightforward, seem to rise dramatically.

-Lauren-

Drone leader Chinese company DJI this morning announced a new drone in their economical but powerful "mini" lineup, the "Mini 5 Pro". Speculation and rumors about this product have been circulating online for many months, and on Monday the world got to see a short video tease announcing that today at 8 am Eastern Time there would be an announcement. That is, unless you were in the USA, since accessing the DJI site from here, unlike from the rest of the world, would not show that tease.

Because once again the administration and our bipartisan tech morons of Congress have forced DJI into the position of not officially making the new drone available here at all, as happened with a larger, more expensive new drone in their lineup recently.

The confluence of the insane self-imposed injuries of oppressive tariffs that must be paid by American consumers, bizarre customs blockages, and legislative and executive branch actions targeting Chinese drone makers in general and DJI in particular, is turning us into a laughingstock when it comes to this important tech.

Law enforcement, search and rescue, and other public safety organizations depend on these (normally) easily accessible drones, and it is likely that lives and property will be lost because they're cut off from this new drone, which brings for the first time in this lineup a larger one-inch camera sensor, plus LIDAR for advanced obstacle avoidance and the ability to "return to home" in many well-lit environments even if GPS satellite signals are lost.

The U.S.'s claims of security concerns regarding DJI drones have never been shown to be real. The rest of the world, including U.S. allies, apparently doesn't share these fake concerns — so their citizens will get the benefit of this new drone, while we're at the mercy of ever more bipartisan political stupidity.

The Mini 5 Pro actually is authorized to be flown in the U.S., and will likely be the last DJI drone for the foreseeable future to receive such approval, given the various restrictions already in the pipeline that I've previously discussed in detail, including one triggered at the end of this year, absent drastic last-minute changes.

Are there any ways for U.S. residents to get hold of the Mini 5 Pro? Since DJI is not officially releasing the product in the U.S., that leaves the gray market (domestic and international), with all the associated risks: potentially drastically elevated pricing, lack of manufacturer warranties, and possible problems ever getting repairs. Some U.S. residents are reportedly already planning trips to Canada or Mexico to obtain these drones and then deal with our nightmare border import mess to bring them back into the U.S.

Frankly, our country is being humiliated by so many pathetic leaders, especially when it comes to tech. And this is but the latest example.

-Lauren-

Calls for Google's Chrome Browser to be separated from Google could potentially result in a privacy and security disaster for literally billions of people around this planet.

An AI firm just offered 34.5 billion dollars (about twice what that company is theoretically worth) for Google's Chrome browser, and then almost immediately another AI firm offered a full 35 billion — what's 500 million dollars among friends, right?

Of course, there's no obvious indication that Google has any interest in selling off Chrome at this time. Another factor is speculation that the judge in an antitrust case that Google lost might order Google to divest itself of Chrome as part of a penalty, though that case is very likely to be appealed and go through considerably more litigation, so we don't really know where it will end up.

But the question you gotta ask yourself is WHY these firms would be willing to pay so much for Chrome. Yes, Chrome has about three and a half billion users who consider it their primary browser, and around a two-thirds global market share among the various browsers users can choose from. So you're still talking about paying roughly $10 per user to get up to a $35 billion offer. But the thing is, Chrome is effectively open source. These firms could essentially get the browser sources for free. The Google Chrome browser is based on the Chromium open source project, which is the origin not only of Chrome but of various other browsers as well. In fact, Microsoft's Edge browser for Windows, which they're constantly trying to manipulate Windows users into switching to from other browsers, is itself based on the Chromium project.

So again, why are these AI firms willing to pay such an enormous sum for Chrome? The answer is that they probably don't really care about Chrome per se. They care about those three and a half billion users who use Chrome and could be dragged along with it over into these other firms' "AI First" philosophies, perhaps along with their browser histories and all the other data associated with routine Web use. So it's not the browser they lust after, it's the people who use the browser.

Now, as we've noted frequently, Google itself is going "full speed ahead" into AI whether users want it or not — and mostly it seems they don't want it. But that said, Google still has an excellent history of protecting and securing user data and privacy related to Chrome browser use and other associated applications. This includes many routine Google and other services and, for example, the Chromebooks that are so popular in education and industry.

The thought of any of that data being handed over to some external entity or entities outside Google is of great concern to many observers in the security and privacy fields. What will happen if those 3.5 billion Chrome users are sucked into those other firms' AI fever dreams that, again, many users — polls say by FAR most — don't want to have anything to do with at all?

Yes, many people are critical of Google. But there's an old saying that the devil you know is better than the devil you don't. And yes, I myself have been quite critical of various of Google's policy decisions, especially related to their Large Language Model generative AI push of late. But I've had, and still have, a great deal of respect and trust in terms of how the regular employees inside Google — the Googlers, many of whom I've known — work to protect our data and our related privacy.

The upshot of all this is that billions of people conduct their Internet usage through the Chrome browser, and it's difficult to see how handing that browser — and those users — over to another firm wouldn't stand a high probability of creating new privacy and security risks for users who already have enough Internet problems to worry about.

-Lauren-

06-Aug-25

There is currently what amounts to a “war” between the U.S. federal and state governments against specific Chinese drone makers, with the big target being DJI. And a major issue has been what would happen if the many organizations — law enforcement, search and rescue, other public safety, farmers, utilities, on and on — couldn’t continue to obtain or use the DJI drones in particular that they have depended on for years.

And the discussion has been largely theoretical for most of this period, because DJI drones, repairs, parts, and service have continued to be available. But now that’s changing, moving beyond the theoretical and into real-world effects, and yeah, the situation is deteriorating even faster than the most pessimistic observers anticipated.

I’m not going to try to review here all the deep details of how we got to this point, except to note that there are multiple aspects. Confusion over rapidly changing tariff rates is one factor. There have been claims that DJI drones have, or maybe could in the future have, security issues, though this has never been demonstrated — apparently DJI has passed every security audit conducted on their products.

Many observers have long suspected that what’s really going on is politically-motivated protectionism from politicians in both parties, because the organizations that buy DJI drones apparently consider them more affordable, reliable, and rapid to obtain than currently available U.S.-made alternatives. And remember, we’re not talking just about little DJI drones you can hold in your hand; they also make very large drones that farmers use to spray crops, and big drones that can lower or gather heavy payloads in rescue situations in isolated, rugged areas, and so on.

But now, with this confluence of factors, including U.S. Customs reportedly pretty much choking off the supply of DJI products into the U.S., we’ve reached a point where the rest of the world can buy these advanced DJI drones, including new ones just recently released and others likely to be released very soon, but the U.S. is cut off. The supply of DJI products has dried up in the U.S. Out of stock virtually everywhere. Repairs are reportedly taking longer, and parts are difficult or impossible to obtain.

DJI is still trying to get a government agency to do the security review mandated by the National Defense Authorization Act as passed by Congress, and the deadline that would trigger an associated DJI drone ban is at the end of this year. The whole situation is completely nuts.

In Florida, the state government ordered official usage of DJI drones stopped. That means grounding 200 million taxpayer dollars’ worth of drones used for police work, firefighting, mosquito control, and more. And the state is apparently only willing to provide a tenth of that amount to replace them with U.S.-made drones that are typically many times more expensive than DJI drones and sometimes take months rather than days to obtain.

In some states, 90% of public safety drones are DJI. Their drones are known to be exceptionally reliable. An Orlando police department indicated that they had five failures of “approved” U.S.-made drones over a year and a half, but no failures among the DJI drones they’d been using.

We could keep going through the statistics and more of these cases but you get the idea. We all want a strong domestic drone industry, but agencies and other groups who rely on DJI drones in the U.S. are being cut off from vital technology that the rest of the world can still easily obtain. There haven’t been publicly demonstrated security problems with DJI drones despite the alarmist hype from the politicians.

This entire mess does appear to be politically driven and BOTH parties are to blame. These politicians need to stop this craziness, because they’re not just putting important U.S. businesses and other organizations at risk with this drone ban nonsense, they’ll be putting U.S. lives at risk as well. That’s irresponsible and it really needs to stop, RIGHT NOW!

–Lauren–

30-Jul-25

We all want to prevent children from being harmed on the Internet, but exactly how to do this without creating even more problems for them and for adults has turned into quite a complicated and political situation.

There have been broad concerns that various website age verification systems could be privacy invasive, ineffective, and in some cases actually might cause even more harm to children than not having the verifications there in the first place. And now with more and more of these systems appearing — the Supreme Court just declared them legal for states to require for commercial porn sites — we’re starting to see various of these predictions coming true.

Remember that age verification systems — whether for porn sites, or social media sites, or pretty much any site, as in China, where virtually all Internet usage can be tracked by the government — don’t only affect children and teens. No matter your age, you have to prove you’re an adult for access. And that opens up tracking possibilities that many politicians in both parties would love to have here in the U.S., with various state and federal legislation already in place or in litigation. And this quickly creates a situation where your basic privacy involving what sites you visit, what topics you research, what videos or podcasts you view or listen to, and so on, may be seriously compromised in ways never possible before now.

There have already been breaches of age verification systems that publicly exposed users’ identity credentials, a treasure trove for crooks. We can reasonably expect directed hacking attacks at these systems as they expand, and if history is any guide many will be successful. Some of these systems use government credentials, some require credit cards, some are using systems to estimate your age from your face, or by how long you’ve been using a particular email address, and so on.

Many adults who don’t want to hand over a credit card or their driver’s license — and their privacy — to these firms have already found various bypass mechanisms, and it appears that — as expected — kids are already WAY AHEAD of adults at this.

A broad age verification law took effect in the UK just a handful of days ago and is already being widely breached; it’s trivially easy to find public discussions where users trade bypass hints and tricks. The degree to which these systems are political theater is emphasized by rules that, for example, order sites not to tell users that they could use VPNs to bypass the checks in many cases — as if VPNs haven’t been used to bypass geographic restrictions for many years — and most age verification systems are geographically based.

But it actually gets even more bizarre. Some of these age verification systems do indeed try to estimate your age from your face as seen by your camera. Of course, if you don’t have a camera on your device or don’t want your face absorbed by these systems, you’re out of luck in this respect. For the new UK age verification system, kids very quickly realized they could use a video game that generates very realistic faces to bypass the checks. And of course, as the nightmarishly advanced AI-based video generation systems continue to evolve — we know where this is headed.

The worst part about all this is that age verification systems broadly applied as some politicians desire, not only have the potential to cut children off from the ability to access crucial information about their own health and safety in cases of abuse, but could actually drive children to all manner of disreputable sites — the kind that can pop up and vanish quickly — that could potentially do them real harm but will never abide by age verification rules.

Age verification seems like an obvious solution to a range of Internet-related problems. But the reality is that many observers feel that it creates more problems than it solves, creating new hacking opportunities and privacy risks, and that in many cases the kids will find ways to bypass it anyway. When trying to fix a complicated problem on the Internet, or anywhere else, the first step probably should be, “Try not to make things even worse.” An idea worth keeping in mind.

–Lauren–

20-May-25

For many, many years, the U.S. has been considered the world’s technology and science research leader in all sorts of areas — medicine and computer science broadly are just two; it’s a long, long list. But increasingly, observers are seeing signs that this is changing in a negative way, and for a number of reasons, with potentially dramatic impacts on consumers and businesses.

One obvious issue is that so much tech spending is being poured into these often awful large language model generative AI systems, including chatbots and the rest. And we’re not talking about all AI. There are many wonderful applications for AI machine learning in scanning medical test results, in storm prediction, and in a vast number of other areas. But that kind of AI often gets confused with generative AI like the chatbots and the other generative AI applications that Big Tech is desperately trying to force us to use.

It seems that most people really aren’t interested in using that kind of AI, yet Big Tech has been pouring many billions of dollars into it and is desperate to find a way to profit from it.

So it seems that other kinds of important research are often being left behind, because generative AI has so much hype associated with it that it’s sucking up funding that could otherwise be used in research that could actually help people far more effectively. Much generative AI has become an excuse for firing your best workers, and has provided automated cheating machines that drive teachers crazy.

Also, we know there have been major cutbacks in a range of U.S. research efforts like the NSF — National Science Foundation — and major university research programs that have long been considered the crown jewels of the U.S.’ global science leadership, in everything from medical research into cancer, Alzheimer’s, and other diseases, to pretty much any other crucial area of science one can name.

It’s certainly possible to find waste in some studies, but when you suddenly cut funding to NSF by more than 50%, there’s really no way you’re not going to cut into essential work, and sometimes disrupt long-term studies that might have brought real breakthroughs.

And a side effect of this is the brain drain. Other countries are using this situation to entice some of our best researchers to move their work to those countries. In many cases those countries will then have the commercial advantage of breakthroughs that otherwise could have been ours, and that could mean the loss of billions and billions of dollars.

And something else just happened that, offhand, I can’t recall ever happening before. A major tech firm from another country — in this case DJI of China, which makes the drones widely used by law enforcement, search and rescue and other public safety organizations, farmers, utilities, and consumers — just introduced a new flagship drone that could be very useful in many ways, including saving lives.

But DJI has decided not to market it in the U.S. at this time, and you really can’t blame them. The rest of the world where DJI does business can get this new drone, but you can’t officially buy it in the U.S. This is apparently due to the lack of stability regarding U.S. tariffs, and to various anti-drone legislation pushed by politicians here, even though police, other public safety organizations, and other groups keep explaining that they really don’t feel they have affordable, practical substitutes with similar capabilities and support that could replace those DJI drones.

Science and technology matter. And falling behind in these areas, whether due to funding for generative AI hype starving other projects of resources, or due to ideological disagreements unrelated to the actual science research projects themselves, or for any other reasons, could end up not only costing us financially, but directly impact us in terms of poorer health and lost lives, as wonderful breakthroughs that might otherwise have occurred slip away from us. And if we let that happen, future generations will probably not be looking back kindly on our behavior, and we couldn’t reasonably fault them for feeling that way.

–Lauren–

08-Apr-25

Well, the executive summary for this one is that we’re probably facing VERY significant price hikes across the board that are likely to seriously impact consumers, businesses, Internet firms that build those massive data centers, basically everybody. These technologies are of course now fundamental to our everyday lives.

The administration has now announced what would be a total tariff on China of over 100%. The fact is, tariffs ARE effectively taxes, and they’re paid by us in the importing country, not by the exporting country. And part of what’s likely driving a lot of confusion is that we’re often getting conflicting statements and conflicting ideas about what the goals of these tariffs are.

Are they to raise money? To punish countries for their own tariff regimes? To punish countries for trade imbalances? Some combination? Tariffs WILL raise money for the government, but again that tariff money is coming from us not from those other countries. And not all trade imbalances are necessarily horrible things, they can represent the fact that the U.S. is a relatively wealthy country that can choose how and where to obtain products the most economically, especially when making them locally isn’t really practical.

There are conflicting signals from the administration regarding whether the tariffs are negotiation tactics and/or whether they’re intended to try to drive manufacturing back to the U.S., and those goals can also easily be in conflict with each other.

It’s understandable why there’s nostalgia for the period many years ago when the U.S. was a manufacturing powerhouse, before it moved more into the services sector over the decades. But realistically, that’s being viewed through rose-tinted 20/20 hindsight. Right now we’re a quarter of the way into the 21st century. Not just the U.S. but the entire global structure of trade, manufacturing, and supply chains has utterly changed since way back then, in many ways significantly to the advantage of the U.S. economy overall in the long run.

Now maybe in theory, if you were willing to spend enough on factories and had workers willing to work at wages similar to those paid in countries like China for example, and you were willing to wait the years necessary to build up those factories and infrastructures — maybe theoretically you could get some significant portion of that high tech manufacturing back, assuming stable economic signals from the government.

But is this practical? Well, there’s the rub. The infrastructure, the resources (some of which, like rare earths, are almost completely controlled by countries like China), the engineering expertise, the worker structures, and all the rest do not seem likely to ever return here in anything like the form they once took.

Take the iPhone as just one example, because as I said, this affects these industries across the board. Something like 90% of iPhones are reportedly manufactured in China. It’s estimated that it would take three very disruptive years and 30 billion dollars for Apple to move just 10% of their supply chain from Asia to the U.S.

And since you can’t reasonably expect U.S. workers to work for Chinese wages, plus so many other costs that are much higher here, you’d probably be looking at iPhones that could cost three times as much as they do currently.

Now the billionaires would still have those silly grins on their faces and couldn’t care less about much higher prices, whether from tariffs or anything else. But for ordinary consumers, and even for firms of pretty much every size, the effects of the kinds of price increases we’re likely to see from these tariffs on a vast array of tech products can’t help but have major negative impacts. The additional costs to consumers and businesses will likely be dramatic and could trigger many additional negative ripple effects.

In a short report like this I can’t really do more than address the tip of this giant iceberg, but the bottom line is that at least as far as the tech segment is concerned, it’s very difficult to find realistically optimistic aspects to any of this. We should keep our eyes open for any positive developments of course, but this is yet another one of those situations where it’s probably not a great idea to hold your breath.

–Lauren–

25-Mar-25

Social Security is in a DOGE-created crisis, and seniors are already at terrible risk.

DOGE moved quickly to order massive changes to Social Security, originally to essentially end all phone-based Social Security support. After major blowback to that — since so many people dependent on Social Security don’t use computers or have Internet access — the plan was revised to continue phone support except for changes to functions like payment accounts, and for identification issues. Those crucial functions will no longer be doable by phone; they will have to be handled over the Internet — which, again, many of the people who need Social Security can’t use — or via in-person visits to Social Security offices, which can be difficult or completely impossible for many elderly or disabled persons, especially in rural areas.

On top of this, DOGE ordered the closure of around 50 Social Security offices and the firing of thousands of their employees, so in-person visits become even harder.

As I’ve said many times before, technical people often don’t really understand the situations that nontechnical people, especially older persons, have to deal with. Often there’s a totally wrong assumption that pretty much EVERYBODY uses the Internet. But as I said, a large percentage of seniors do not use the Internet for anything like this, or at all.

Now, DOGE originally said all of this was to fight fraud. But its early claims that tens of millions of deceased persons over 100 years old were getting Social Security payments were apparently incorrect. It’s important to understand these systems: DOGE reportedly didn’t realize that those historical records did not mean all those dead people were getting payments; other aspects of the systems prevented payments to them.

And studies have shown that apparently improper Social Security payments amount to about 1% of overall payments, mostly errors rather than fraud, and that two-thirds of mistaken payments were clawed back.

This all really erupted over the last few days when the administration’s new Commerce Secretary, billionaire Howard Lutnick, made some stupendously tone-deaf and clueless comments in an interview. He said that it’s fraudsters who would complain most loudly about missing Social Security payments, claiming that his 94-year-old mother-in-law wouldn’t call to complain — she’d assume there was something messed up and that she’d get her payment the next month.

That of course means having faith that the next payment won’t also fail to appear due to the same problem, but then again, having a billionaire son-in-law probably would make that missed payment of somewhat less concern. Unfortunately, most Social Security recipients don’t have billionaire sons-in-law. He said cutting off payments is the easiest way to find a fraudster, because whoever screams is the one stealing.

As you can imagine, Lutnick has been widely criticized for these statements. You really have to wonder what planet he’s been living on.

Because the reality is that 40% of retirees rely on Social Security as their sole source of income, and for many more it’s a primary source. You cut off Social Security from these retirees, even for just one month, either by declaring them dead when they’re still alive — reports of that are already increasing — or by making it impossible for them to quickly fix payment or identification problems by phone when they can’t travel to a Social Security office or use the Internet, and many won’t have any way to pay for food or lodging or anything else.

And these changes, which are going to so negatively impact so many seniors dependent on Social Security, were only announced VERY recently and are being rushed into effect at the end of THIS month, just a week from now, leaving seniors in an even worse situation. Many of them wouldn’t have anyone locally to help them even if they had more time.

This situation has gone from bad to disastrous. Actually improving Social Security is indeed a good goal, but creating a massive mess that will leave so many vulnerable seniors at such risk is both gruesome and utterly unacceptable.

–Lauren–

03-Feb-25

I have long held that efforts to tamper with Section 230 of the Communications Decency Act of 1996 are dangerously misguided. It is this section that immunizes online service providers from liability for third-party content that they carry. I have also argued that attempts to mandate “age verification” for social media will spectacularly backfire in ominous ways for social media users in general, and will not actually protect children — and I continue to believe that age verification systems cannot achieve their stated goals and will cause dramatic collateral damage.

One of my key concerns in both of these cases is that they would over time cause major social media platforms to drastically curtail the third-party content that they host, eliminating as much as possible that would be considered in any way controversial, in an effort to avoid liability.

I still believe that this is true, that this would be the likely outcome of Section 230 being altered in any significant ways and/or widespread implementation of the sorts of age verification systems under discussion.

But I’m now wondering if this would necessarily be such a bad outcome, because the large social media platforms appear to have increasingly eliminated all pretense of social responsibility, making it likely that the damage they have done over the years through the spreading of misinformation, disinformation, racism, and all manner of other evils will only be exacerbated — become much, much worse — going forward.

Seeing billionaire Mark Zuckerberg today proclaiming nonchalantly that he’s making changes to Meta platforms (Facebook, Instagram, etc.) that will inevitably increase the level of harmful content — he essentially said that explicitly — is I believe a “jumping the shark” moment for all major social media.

I feel it is time to have a serious discussion regarding potential changes to Section 230 as it applies to large social media platforms, with an aim toward forcing them to take responsibility for the damage the content on their platforms causes to society, whether it is third-party content or their own.

I would also add — though this extends beyond the formal scope of Section 230 and social media — that firms who have deployed Generative AI systems (chatbots, AI Overviews, etc.) should be held responsible for damage done by misinformation and errors in the content that those systems generate and provide to users.

It is obvious that the major social media platforms are at best now providing only lip service to the concept of social responsibility, or are effectively abandoning it entirely, for their own political and financial expediency — and the situation is getting rapidly worse.

We must make it clear to these firms that they serve us, not the other way around. Changes to Section 230 as it applies to the large social media platforms may be the most practical method to convince the usually billionaire CEOs of these firms that our willingness to be victimized has come to an end.

–Lauren–

I just had a good laugh. Someone asked me this morning how they could reach the “Google Ombudsman” for help with an account lockout issue. And I laughed not because their situation was funny, but because of the sad fact that I’ve been pushing for Google to establish an Ombudsman (or these days, often called Ombudsperson) role, for … well … decades. I’ve pushed from the outside. When I had the opportunity, I’ve pushed from the inside. Obviously, I never had any luck with this.

But I did get curious again today. For years, my essays on this topic ranked very high on Google Search. What about now?

Another laugh! I searched for:

google ombudsman

and a blog post of mine on this topic from 2009 is still on the first page of search results — 16 years later!

This was actually superseded by my more recent posts about this, such as 2017’s “Brief Thoughts on a Google Ombudsman and User Trust”:

https://lauren.vortex.com/2017/06/12/brief-thoughts-on-a-google-ombudsman-and-user-trust

But the story is still exactly the same as it was originally — Google has never been willing to budge on this issue, even as the need for such a role (or roles) has dramatically increased over the years, not just for issues related to account lockouts and other traditional Google user problems that cry out for valid escalation paths, but of course now related to the rapidly rising range of AI-related controversies.

The more things change, the more they stay the same.

Very sad.

–Lauren–

I’ll use very simple words for these government officials: You ban Chinese drones, you’re putting U.S. lives at risk.

Congress, with bipartisan support, very recently passed what is effectively a ban on the import of (and perhaps, but less likely, the use of existing) Chinese drones such as those from market leader DJI, taking effect against the firm in a year if DJI can’t convince a government agency to certify that they are not a security risk — and of course, how DJI is supposed to accomplish this isn’t spelled out.

So now it gets even worse. The U.S. Commerce Department is considering its own Chinese drone bans, and has opened a public comment period through early March.

The absolute bull-headed STUPIDITY of these bans is beyond belief. There is no evidence extant that DJI drones present a security risk — only theoretical, politically-motivated speculation from both political parties that makes virtually no sense at all.

The organizations and businesses that depend on these drones — law enforcement, search and rescue, agriculture, utilities, and a long list of others — have not found practical alternatives to DJI drones in the vast majority of cases. DJI dominates the market because they make the highest quality drones at prices these entities can afford, and provide world-class support for them.

The politics of this situation are beyond disgusting. Is it too much to hope that the Trump administration will be more reasonable about this? Yeah, probably not a good bet, but being more sensible than both parties in Congress — and the current administration — on this score is a very low bar at this point.

Here is the current official Commerce URL with their announcement. This was not easy for me to find — not a single media source I saw bothered to include this crucial information!

https://www.bis.gov/press-release/commerce-issues-advance-notice-proposed-rulemaking-secure-unmanned-aircraft-systems

Absolute insanity. -L

According to reports, in a recent employee meeting, Google CEO Sundar said that (essentially) the entire focus of Google in 2025 will be AI and pushing it out to consumers in a “scrappy” way — with him referencing the early days of Google. That was a period when, I would note, their rush resulted in massive arrogance, and privacy problems at the firm were at their peak. Over the years both of these were reduced — especially the privacy issues, where Google actually has become world class in terms of protecting users’ privacy (and security). Both could return to their former terrible levels under Sundar’s deeply flawed AI approach.

With such a focus on AI, and so much money being poured into it by Google, it is inevitable that other core Google services will ultimately suffer, and given Google’s notoriously deficient “customer service” when things go wrong — from account lockouts to a wide range of other problems — it’s only going to get worse. Word is that Google teams not mostly devoted to AI are already suffering cutbacks. How long before Google decides that Gmail, etc., just aren’t worth keeping around anymore? Couldn’t happen? Think again.

Google’s AI continues to be an endless source of mediocrity and of wrong, confused, and even utterly inane and nonsensical answers and false statements, and Google refuses to take responsibility for these and for how they could negatively (sometimes even dangerously) impact users. This renders Google’s incredibly reckless pushes to embed AI deeply into Google Search (thanks to AI, decreasingly trustworthy), and to introduce their new “AI Agents” (taking over web browsers on behalf of users — an enormous target for hackers and phishing attacks), both horrific risks for consumers.

Sundar would (I suspect) say that unless Google moves in this direction, Google is doomed. I believe that he and his executive team have it exactly backwards. Consumers do not want AI. The more they learn about it, the less they trust it or care for what it does in their day to day lives. They don’t want to pay for it. They don’t want it popping up in their faces constantly and being shoved down their throats. They certainly DO want Google to take 100% responsibility for what it does.

Sundar wants Google to be scrappy. We might as well delete that leading “s” — unfortunately, that’s Google’s likely fate, because his AI path will lead to Google’s almost certain doom one way or another. The exact timing and form of that doom cannot be accurately predicted right now, but billions of Google’s users will suffer in the process.

And one more thought. It’s been reported that apparently Sundar has now joined the exalted ranks of the billionaires. How that may or may not be affecting his thinking in these matters I’ll leave as an exercise for the reader.

Dark times for Google’s users, indeed.

–Lauren–

All the talk now is about using AI-based mechanisms to authenticate social media users as being not underaged for access, through analysis of their faces on video feeds. The multitude of ways in which this could fail in both directions (declaring faces either older than they really are, or younger than they really are, not to mention how you determine from a face if someone is 15.5 or 16 years old when the minimum age required for access is 16) are far too many to even list here.

But given all of the attention, I feel that we need terminology to quickly describe the entire area of bypass techniques targeting these age verification/gating systems.

I propose the term:

BALOK

As in, “The 11-year-old easily baloked the system and gained quick access.”

or:

“The free software was capable of baloking the ID portal within seconds to bypass the age restrictions.”

BALOK is an acronym for:

Bypassing Age Locked Online Keys

Of course, fans of the original “Star Trek” already know what’s really going on.

Balok was an alien in the first season of “Star Trek”, from an episode called “The Corbomite Maneuver”. In appearance he was a very young, vulnerable child. But in his audio and video communications with the Enterprise, he employed an artificial booming voice and what turned out to be a menacing-appearing puppet to fool the Enterprise crew into fearing him.

The parallels with the current face ID age verification systems are obvious.

Children will be baloking the social media age gating systems in myriad ways, while adults who were supposed to have access will be blocked due to both face analysis errors and technology access problems. Not everyone uses smartphones with cameras to access social media, and many people rightly fear sending video images of their faces to these or other firms due to justifiable concerns about potential abuses.

I anticipate both freeware baloking software and baloking as a (largely free) service. Kids will band together in groups to develop new baloking techniques. They are extremely resourceful when it comes to these areas, more so than the vast majority of adults.

Balok knew that it was easy to fool his potential adversaries with a faked persona. The ingenuity of kids today pretty much guarantees that their own efforts to balok the social media firms, and in essence the politicians who pushed age blocks in the first place, will be even more successful in the real world.

–Lauren–

Is this happening around the world, or is it only here in the USA that everything appears to be going totally nutso? Seemingly all at once, politicians of both parties look and sound like they’ve given up all pretense of being educated human beings and are behaving like infantile idiots with political agendas. Oh boy, what a mix.

Logic? Forget about it! Pandering to fear and nonsense? That’s the way to win elections!

We don’t have much clearer examples of this than two simultaneous situations involving drones.

First, as you probably know by now, there has been a hysterical panic in New Jersey and surrounding areas about supposed swarms of mysterious “drones”. All evidence to date is that this is entirely nonsense, fed by clickbait social media, opportunistic mainstream media, and politicians in both parties out to seize an opportunity to score political points from people’s ignorance about technical realities.

So far, other than legal hobby and commercial drones that are routinely in the air — there are something over a million licensed in the U.S. — people have been reporting as “mystery drones” various shaky, blurry images of stars, helicopters, and airplanes (maybe the green and red flashing lights and the white strobe lights give them away, huh?), plus all manner of other completely ordinary stuff that most people just never notice. And you have politicians like Democratic Senator Chuck Schumer irresponsibly trying to ram a new surveillance bill through the Senate to protect us from this nonexistent threat — Republican Senator Rand Paul blocked him. When we have to depend on Rand Paul to be the sensible one, we must be in The Twilight Zone.

Politicians in both parties including Trump have been making all manner of claims feeding the drone hysteria — based on nothing real, and calling for shooting down the supposed “drones” if they “can’t” be identified, putting the lives of pilots and passengers on ordinary plane flights at risk. People have been shining lasers at planes — a criminal offense — again risking pilots and passengers.

The whole thing is totally nuts. It’s reminiscent of a notorious panic in Bellingham, Washington in 1954, when people started noticing ordinary manufacturing defects in car windshields and mass hysteria broke out with people fearing it was nuclear radiation or some other kind of attack. I’m not kidding. Google it.

The drone panic wasn’t helped by the sluggish reaction of government agencies in speaking clearly to the issue, but the fact that there were no collisions between supposed drones and other air traffic spoke volumes about the ridiculous nature of the entire situation. The FAA has now issued some temporary drone flight restrictions in various areas of New Jersey, to try to calm things down even further. But if agencies had gotten ahead of this issue early on, the information vacuum might not have been filled with so much ridiculous nonsense.

One of the best new videos I’ve seen explaining the current drone hysteria is:

https://www.youtube.com/watch?v=MAWCIfs0ER4

I strongly recommend that it be widely viewed.

Meanwhile, the political hysteria over Chinese drone maker DJI’s drones as a claimed national security risk — with absolutely no evidence of this being presented — has reached a bizarre and dangerous inflection point in Congress.

DJI holds a very large majority of the U.S. drone market not just for hobbyists but in the absolutely crucial areas of law enforcement, search and rescue, other public safety groups, agriculture, utilities, and many other areas of society. The reason is simple — these groups have not found practical competing products from other manufacturers that meet the quality, reliability, and service support levels that DJI routinely provides. DJI drones are used in myriad areas to directly support the protection of human lives and property, keeping critical infrastructure operating, and an almost endless list more.

Still, some politicians in both parties keep screaming at the top of their lungs that DJI’s drones must be banned, no matter how many lives are lost or hurt in the process. Again, there is zero evidence that has ever been presented that these drones are a security risk, and DJI has bent over backwards to demonstrate that they do not threaten security. But trying to logically argue with politicians who have their own agendas (e.g., by pointing out to them that a foreign power could just buy satellite surveillance photos — they don’t need to “spy” through commercial drones!) is like debating a moldy sponge. All you get for your efforts is a rotting odor.

It was thought that the current defense appropriation bill might push through a DJI ban. This was likely to include DJI drones, cameras, audio equipment, and other products — either import bans alone, or more likely import bans combined with telling the FCC to prohibit their use of U.S. radio frequencies, which could also in theory — but probably not in practice — block use of these DJI products already received and in routine use in the USA.

Instead, with so many crucial public safety and other groups opposed to the ban, the final language puts off a ban for a year, and says to avoid the ban DJI must get an appropriate national security agency to certify that their products are not a security risk.

Proving a negative is always, uh, challenging. But worse — and this is something straight out of Putin’s Russia — the language does not say which national security agency should do this or require any of them to do it. Franz Kafka would love this. Putin would smile.

It’s possible that the next administration will be more receptive to logical arguments about why DJI products should not be banned, and if the ban moves forward DJI is virtually certain to litigate through the courts, as well they should.

But the sheer irresponsibility of politicians wanting to ban such crucial products, based on zero evidence and a lot of wild-eyed political posturing, is nothing short of disgusting.

So here we are. Blurry photos of stars and planes are being touted as terror drones, with politicians more than happy to latch onto the panic for their own purposes. Actual drones crucial to a vast array of industries and to saving lives are at risk of being banned by politicians who scream “national security” without evidence.

Yeah, I don’t know about the rest of the world, but here in the USA it sure looks like we’ve fallen off the deep end of sanity.

–Lauren–

 

Ah yes. Poodle skirts and bobby socks. Jimmie Rodgers and the Everly Brothers. Around the world, there seems to be a collective longing for a rose-tinted, 20/20 hindsight, fantasy view of “the good old days” of the 1950s, before those damned computers started infiltrating so much of our lives. And social media bans have become the means by which governments hope to force children off their phones and back to sometimes rather violent competitive sports and other ultraviolet-light-suffused outdoor activities.

It won’t work. The latest example of this yearning for the past is in Australia where, with very broad public support, the government just pushed through (in about a week!) a ban on children under 16 using social media. There are no exceptions for anyone with current accounts. There are no exceptions to allow parents to permit their children to use social media if the parents determine that’s best for their own children. The ban likely will include all of the major social media platforms except (for now at least) YouTube, which is widely used in schools.

Clearly, there have been enormously tragic incidents involving children who were, for example, bullied or otherwise abused over social media. But there are also many examples of the positive benefits of social media helping children who were being abused by family members, for whom access to assistance over social media was crucial. And many examples of isolated children for whom social media has been an important benefit to their mental health. And children who have created educational outreach and other extremely positive projects via social media.

I’m not a sociologist. I’ll leave it to the experts in that and related fields to explain the complex and sometimes competing aspects of social media and young persons.

But I am a technologist. And as such, my view is that Australia’s ban almost certainly won’t work and will end up doing far more damage than the status quo before the law, as it creates a culture of false hopes, pushback, and circumvention.

Like all social media age-gating laws, the Australian law would require ALL users of social media to be age-verified. That’s how you (in theory) block the children. The law wisely does not penalize parents or children who circumvent the law, instead depending on financial fines against the social media firms. And at the very last minute, a provision was apparently added that prohibits requiring use of government credentials for identification. This was a positive change, because as I’ve discussed many times, age verification based on government credentials for website access would lead almost inevitably to broad tracking of Internet usage by the government, in much the style that users in China are subjected to today.

So how would Australia do age verification for this law? The law is planned to take effect a year from now, and an age verification trial is supposed to take place before then. Most frequently discussed are AI-based (oh boy, here we go …) techniques to analyze users’ faces, online behavior patterns, types of content they access … and so on. 

It doesn’t take much imagination to create a long list of ways that such techniques will produce errors in both directions (passing users who are too young, blocking users who are actually old enough) — even in the absence of circumvention techniques. E.g., how do you determine from a face whether a child is 15 and a half years old or 16 years old? Uh huh. Hell, I’ve known people who were 30 and had faces that looked like they were 15.

But even beyond the mumbo jumbo of supposed AI-based solutions, the list of relatively straightforward circumvention techniques seems almost endless. And anyone who thinks that children won’t figure this stuff out is in for a rude awakening.

One obvious problem for the law will be VPNs. Unless the Australian government plans to detect/ban VPN usage — which would have enormous negative consequences — simply creating accounts on these social media platforms that appear to be coming from countries other than Australia is an obvious circumvention methodology.

Attempting to ban children from social media won’t work. It will make a complicated situation even worse, and it is technically impractical without creating a hellscape of government-verified identity Internet usage tracking for all users of all ages — and even then, circumvention techniques would still exist.

The desire to eliminate the negative consequences of social media is a laudable one. And there’s much that could be done by social media firms to better prevent abuse of their platforms, especially when children are targeted for such abuse. 

But age-based bans are a “feel good” effort that will create new harms and will fail. They should be firmly rejected.

–Lauren–

Despite my continuing differences with various specific aspects of Google operations that I feel could be straightforwardly improved to the benefit of their users, I can’t emphasize enough what an utter disaster the DOJ’s proposed Google antitrust “remedies” would be for the privacy and security of Google’s users and consumers more broadly, and for the overall usability of these crucial services as well.

Google’s privacy and security standards and teams are world class, and I have enormous trust in them. Keeping email and the many other Google services that billions of people rely on in their everyday lives safe and secure is an enormously complex and continually evolving effort. Key to this — as well as to making sure that users’ data entrusted to Google is not put at risk by firms with less stringent standards — is the integrated nature of the Chrome browser, Android, and other aspects of Google services. Even with this integration, it’s a monumental task.

Breaking these aspects of Google apart in the name of supposed “competition” — which would actually only make most non-technical users’ interactions with tech more confusing and complicated, just what consumers clearly don’t want — would be a gargantuan mistake that consumers would unfortunately end up paying for in myriad ways for many years.

Google is far from perfect, but DOJ seems hell-bent on pushing an antitrust agenda in this case that would make consumers’ lives far worse instead of better. Whether that’s a result of DOJ ignoring the technical realities in play or simply not really understanding them, it’s the wrong path and would lead to a very bad place indeed for all of us.

–Lauren–

Despite my ongoing concerns over various directions that current management has been taking Google in recent years, I must state that I agree with Google that the kinds of radical antitrust “remedies” — and “radical” is the appropriate word — apparently being contemplated by DOJ would almost certainly be a disaster for ordinary users’ privacy, security, and overall ability to interact with many aspects of related technologies that they depend on every day.

These systems are difficult enough to keep reasonably user friendly and secure as it is — and they certainly should continue to be improved in those areas. But what DOJ is reportedly considering would be an enormous step backwards and consumers would be the ultimate victims of such an approach.

–Lauren–

“I Am the Very Model of a Google AI Overview”
Lauren Weinstein

To the tune of “I Am the Very Model of a Modern Major-General” (with apologies to Gilbert & Sullivan)

– – –

I am the very model of a Google AI Overview.
I know what you’ll be searching for,
At least an hour ahead of you.

My answers aren’t always right,
In fact they’re often quite a brawl.
But hey we’re Google and you’re here,
So that’s the way the chips will fall.

We really don’t like those blue links,
They’re so old-fashioned we agree.
Why bother sites with viewers,
When users can just come here to me?

Of course some sites may suffer,
And yeah that’s a bit tragic to see.
But while we aren’t evil,
Face the facts it’s all about money!

Now if your Google search results,
No longer seem of quality,
It’s not our fault,
The problem is,
Your queries are just all lousy.

So welcome to my AI world,
An LLM can’t think things through,
I am the very model of a Google AI Overview.

– – –

–Lauren–

Google Search AI Overview answers are clearly not ready for prime time. Google should immediately return them to Labs (opt-in) status. Before deploying them again generally beyond Labs, there needs to be an opt-out option — permanent until changed by the user — at least for logged-in users.

–Lauren–

The technical term for what’s happening now with Artificial Intelligence, especially generative AI, is NUTS. I mean it’s not just Google, but Microsoft too, with OpenAI’s ChatGPT. The firms are just pouring out half-baked AI systems and trying to basically ram them down our throats whether we want them or not, by embedding them into everything they can, including in irresponsible or even potentially hazardous ways. And it’s all in pursuit of profits at our expense.

I’ll talk specifically about Google Search shortly, but so much of this crazy stuff is being deployed. Microsoft wants to record everything you do on a PC through an AI system. Both Google and Microsoft want to listen in on your personal phone calls with AI. YouTube is absolutely flooded with low quality AI junk videos, making it ever harder to find accurate, useful videos.

Google is now pushing their AI “Help me write” feature, which feeds your text into their AI from all over the place, including from many Chrome browser context menus, where in some cases it has replaced the standard text UNDO command. And “Help me write” is so easy to trigger accidentally that you could end up feeding personal or business proprietary information not only into the AI, but also to the human AI trainers who, Google notes, can also see this kind of data.

OK, now about Google Search. For quite some time now many people have been noticing a decline in the quality of Google search results — and keep in mind that Google handles the overwhelmingly vast percentage of searches by Internet users. Google has recently been rolling out to regular Google Search results what they call AI Overviews: AI-generated answers to what now seem like most queries. These can push all the actual site links — the sites from which Google’s AI presumably pulled the data to formulate those answers — so far down the page that few users will ever see them, potentially starving the sites that provided that data of the user views they need to stay up and running.

Some of the AI Overview answers have links, but often they’re dim and obscure and almost impossible to even see unless you have perfect 20/20 vision and very young eyes. On top of that, many of these AI Overview answers are banal, stupid, and often confused or plain wrong, mixing up accurate and inaccurate information, sometimes in ways that could actually be unsafe, for example when they’re wrong about health-related questions. This is all very different from the kinds of top-of-page answers that Google has provided for some time now for straightforward search queries, like math questions, definitions of words, or when a particular film was released.

These AI Overview answers are showing up all over the place and like I said, much of the time their quality is abysmal. Now of course if you’re not knowledgeable about a subject you’re asking about, you might assume a misleading or wrong AI Overview answer is correct, and since Google has now made it less likely that you’ll scroll down the page to find and visit sites that may have accurate information, it’s a real mess. There are some tricks with Google Search URLs that I’ve seen to bypass some of this for now, but Google could disable those at any time.
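
One widely circulated example — which may or may not be among the tricks meant here, and which (again) Google could disable at any time — appends the “udm=14” parameter to a search URL to request the plain “Web” links view:

https://www.google.com/search?q=example+query&udm=14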

What’s really needed is a way for users to turn all of this generative AI content completely off until such a time, if ever, that a given user decides they want to turn it on again. Or better yet, these AI features should be ENTIRELY opt-in, that is, turned off UNTIL you decide you want to use them in the first place.

So once again we see that fears of super intelligent AIs wiping out humanity are not what we should be worried about right now. What we need to be concerned about are the ways that Big Tech AI companies are hell-bent on forcing generative AI systems into all aspects of our private lives in ways that are often unwanted, confusing, irresponsible, or even worse. And the way things seem to be going right now, there’s no indication that these firms are interested in how we feel about all this.

And that’s not going to change so long as we’re willing to continue using their products without making it clear to them that we won’t indefinitely tolerate their push to stuff generative AI systems into our lives whether we want them there or not.

–Lauren–

Evil [ 18-May-24 5:29pm ]

Every day it becomes ever clearer. All the talk of super-intelligent AI destroying humanity was and is nonsense. It’s the CEOs running the Big Tech firms pushing AI into every aspect of our lives in increasingly irresponsible and dangerous ways who are the villains in this saga, not the machines themselves. The machines are just tools. Like hammers, they can be used to build a home or smash in a skull.

For evil, you have to look to humans and their corporate greed.

–Lauren–

Let me be very clear about why I am, frankly, so angry with Google over their Account Recovery failures.

I have on numerous occasions directly proposed to Google a variety of significant improvements to their current Account Recovery processes.

While their existing procedures successfully recover many accounts daily, they tend to fail disproportionately for innocent non-techie users, seniors, and other marginalized groups — users who are still dependent on Google for email and data storage in a world where other support options (like telephone and non-email billing and support) are being rapidly eliminated by firms as cost-cutting measures.

These are often users who barely understand how to use these systems that they’ve in many cases essentially been coerced into using. When they’re locked out, they can lose everything — email, photos, and other personal data crucial to their lives.

I have on multiple occasions proposed specific improvements to Google’s procedures that could be invoked optionally by users who desperately needed access to accounts locked out without good cause, along with methods by which Google could recover the costs of the additional (and typically not extensive) support measures required to accomplish this.

My proposals have never received serious consideration by Google. I always receive the same responses. “We recover lots of accounts and that’s good enough.” “Nobody is forced to use Google.” “People who don’t properly maintain their recovery addresses and phone numbers have nobody but themselves to blame.”

Unspoken and unwritten but clearly part of the underlying message: “We just don’t care about those categories of users. Hopefully they’ll go away and never come back.”

It’s a travesty. I’ll keep trying, because hope springs eternal, and I’m too old now to give up on even apparently hopeless causes. Silly me, I guess. Take care.

–Lauren–

Google and Seniors [ 09-May-24 7:57pm ]

Google refuses to create a specific role for someone to oversee the issues of older users, who depend on Google for so many things but so often get the shaft and lose everything when something goes wrong with their accounts. Google should AT LEAST (I still think the role is crucial) be providing focused help resources and a recurring (at least monthly) blog to help this class of users (“Google for Seniors”, “Google Seniors Blog”).

This would all be specifically oriented toward helping these users deal with the kinds of Google Account and other Google problems that so often disproportionately affect this group.

This would be good for these users (who Google unreasonably and devastatingly considers to be an unimportant segment of their user base) and frankly good for Google’s PR in a highly challenging and toxic political environment.

I’m so tired of having so many people in this category approach me for help with account and other Google issues because they never understood the existing Google resources that, frankly, are written for a different level of tech expertise and understanding.

I have more detailed thoughts on this if anyone cares. No, I’m not holding my breath on this one.

–Lauren–

06-Feb-24
About Google and Location Privacy [ 16-Dec-23 5:52pm ]

You may have seen a lot of press over the last few days about Google moving location data by default to be on-device (e.g., your phone) rather than stored centrally (and encrypted if you choose to store it centrally), and how this will help prevent abuses of broad “geofence” warrants that law enforcement uses to get broad data about devices in a particular specified area.

These are all positive moves by Google, but keep in mind that Google has long provided users with control over their location history — how long it’s kept, the ability for users to delete it manually, whether it’s kept at all, etc.

But when is the last time your mobile carrier offered you any control over the detailed data they collect on your devices’ movements? If you’re like most people, the answer seems to be never. And while cellular tracking may not usually be as precise as GPS, these days it can be remarkably accurate.

One wonders why there’s all this talk about Google, when the mobile carriers are collecting so much location data that users seem to have no control over at all, data that, one might assume, is of similar interest to law enforcement for mass geofence warrants.

Think about it.

–Lauren–

As you may know, Google has recently begun a protocol to delete inactive Google accounts, with email notices going out to the account and recovery addresses in advance as a warning.

Leaving aside for the moment the issue that so many people who have lost track of accounts probably have no recovery address specified (or an old one that no longer reaches them), there’s another serious problem.

A few days ago I received a legitimate Google email about an older Google account of mine that I haven’t used in some time. I was able to quickly reauthenticate it and bring it back to active status.

However, this may be the first situation (there may be earlier ones, but I can’t think of any offhand) where Google is actively soliciting people “out of the blue” to log into their accounts (typically older accounts, which I suspect are more likely not to have 2-factor authentication enabled, for example).

This is creating an ideal template for phishing attacks.

We’ve long strongly urged users not to respond to emailed efforts to get them to provide their login credentials when they have not taken any specific actions that would trigger the need for logging in again — and of course this is a very common phishing technique (“You need to verify your account — click here.” “Your password is expiring — click here.”, etc.)

Unfortunately, this is essentially the form of the Google “reactivate your account” email notice. Ordinary busy users, confused to see one of these suddenly pop into their inbox, may either ignore it thinking it’s a phishing attack (and so ultimately lose their account and data), or may fall victim to similar-appearing phishes leveraging the fact that Google is now sending these out.

I’ve already seen such a phish, claiming to be Google prompting with a link for a login to a supposedly inactive account. So this scenario is already occurring. The format looked good, and it was forged to appear to be from the same Google address as used for the legitimate Google inactive account notification emails. Even the internal headers had been forged to make it appear to be from Google. The IP address in the topmost “Received:” header line was wrong, of course, but how many people would notice this — or even look at the headers in the first place?
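
For anyone who does want to check, here is a minimal sketch, assuming you have saved the raw message to a file (e.g., via Gmail’s “Show original” download). It uses Python’s standard “email” library to print the Received: chain; the topmost Received: line is the one added by your own receiving server, so the handoff IP recorded there is the one worth scrutinizing:

import sys
from email import message_from_binary_file

# Parse a raw email message saved to disk.
with open(sys.argv[1], "rb") as f:
    msg = message_from_binary_file(f)

# Each relay prepends its own Received: header, so the first one
# printed here is the hop recorded by YOUR receiving server.
for hop in msg.get_all("Received", []):
    print(hop.strip())
    print("---")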

I can think of some ways to help mitigate these risks, but as this stands right now I am definitely very concerned. 

–Lauren–

Last February, in:

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

I suggested expansion of the existing Robots Exclusion Protocol (e.g. “robots.txt”) as a path toward helping provide websites and creators control over how their contents are used by AI systems.

Shortly thereafter, Google publicly announced their own support for the robots.txt methodology as a useful mechanism in these contexts.

While it’s true that adherence to robots.txt (or related webpage Meta tags — also part of the Robots Exclusion Protocol) is voluntary, my view is that most large firms do honor its directives, and if moves toward a regulatory approach were ultimately deemed genuinely necessary, a more formal mechanism would be a possible option.

This morning Google ran a livestream discussing their progress in this entire area, emphasizing that we’re only at the beginning of a long road, and asking for a wide range of stakeholder inputs.

I believe of particular importance is Google’s desire for these content control systems to be as technologically straightforward as possible (so, building on the existing Robots Exclusion Protocol is clearly desirable rather than creating something entirely new), and for the effort to be industry-wide, not restricted to or controlled by only a few firms.

Also of note is Google’s endorsement of the excellent “AI taxonomy” concept for consideration in these regards. Essentially, the idea is that AI Web crawling exclusions could be specified by the type of use involved, rather than by which entity was doing the crawling. So, a set of directives could be defined that would apply to all AI-related crawlers, irrespective of who was doing the crawling, permitting (for example) crawlers looking for content related to public interest AI research to proceed, while directing that content not be taken or used for commercial generative AI chatbot systems.
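
To make that idea concrete, here is a purely hypothetical sketch of what use-based directives might look like in a robots.txt file. The use-class tokens below are invented for illustration only; nothing of the sort has been standardized:

# Hypothetical use-class tokens -- invented for illustration, not standardized:
User-agent: ai-public-research
# Crawling for public interest AI research is permitted site-wide.
Allow: /

User-agent: ai-generative-training
# Crawling to feed commercial generative AI chatbot systems is barred.
Disallow: /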

Again, these are of course only the first few steps toward scalable solutions in this area, but this is all incredibly important, and I definitely support Google’s continuing progress in these regards.

–Lauren–

As per requests, this is a transcript of my national network radio report earlier this week regarding Google passkeys and Google account recovery concerns.

 – – –

So there really isn’t enough time tonight to get into any real details on this but I think it’s important that folks at least know what’s going on if this pops up in front of them. Various firms now are moving to eliminate passwords on accounts by using a technology called “passkeys” which bind account authentication to specific devices rather than depending on passwords.

And theoretically passkeys aren’t a bad idea, most of us know the problems with passwords when they’re forgotten or stolen, used for account phishing — all sorts of problems. And I myself have called for moving away from passwords. But as we say so often, the devil is in the details, and I’m not happy with Google’s passkey implementation as it stands right now. Google is aggressively pushing their users currently, asking if they want to move to a passwordless experience. And I’m choosing not to accept that option right now, and while the choice is certainly up to each individual, I myself don’t recommend using it at this stage.

Without getting too technical, one of my concerns is that anyone who can authenticate a device that has Google passkeys enabled on it, will have full access to those Google accounts without having to have any additional information — not even an additional authentication step. And this means that if — as is incredibly common — someone with a weak PIN for example on their smartphone, loses that device or it’s stolen, again, happens all the time, and the PIN was eavesdropped or guessed, those passkeys could let a culprit have full access to the associated Google accounts and lock out the rightful owner from those accounts before they had a chance to take any actions to prevent it.

And I’ve been discussing my concerns about this with Google, and their view — to use my words — is that they consider this to be the greatest good for the greatest number of people — for whom it will be a security enhancement. The problem is that Google has a long history of mainly being concerned about the majority, and leaving behind vast numbers of users who may represent a small percentage but still number in the millions or more. And these often are the same people who through no fault of their own get locked out of their Google accounts, lose access to their email on Gmail, photos, other data, and frankly Google’s account recovery systems and lack of useful customer service in these regards have long been a serious problem.

So I really don’t want to see the same often nontechnical folks who may have had problems with Google accounts before, to be potentially subjected to a NEW way to lose access to their accounts. Again it’s absolutely an individual decision, but for now I’m going to skip using Google passkeys and that’s my current personal recommendation.

–Lauren–

Google continues to push ahead with its ill-advised scheme to force passkeys on users who do not understand their risks, and will try to push all users into this flawed system starting imminently.

In my discussions with Google on this matter (I have chatted multiple times with the Googler in charge of this), they have admitted that their implementation, by depending completely on device authentication security which for many users is extremely weak, will put many users at risk of their Google accounts being compromised. However, they feel that overall this will be an improvement for users who have strong authentication on their devices.

And as for ordinary people who already are left behind by Google when something goes wrong? They’ll get the shaft again. Google has ALWAYS operated on this basis — if you don’t fit into their majority silos, they just don’t care. Another way for Google users to get locked out of their accounts and lose all their data, with no useful help from Google.

With Google’s deficient passkey system implementation — they refuse to consider an additional authentication layer for protection — anyone who has authenticated access to your device (that includes the creep that watched you access your phone in that bar before he stole it) will have full and unrestricted access to your Google passkeys and accounts on the same basis. And when you’re locked out, don’t complain to Google, because they’ll just say that you’re not the user that they’re interested in — if they respond to you at all, that is.

“Thank you for choosing Google.”

–Lauren–

In the 2005 film “V for Vendetta” a fictional UK government has turned into a tightly censored, tracked, and controlled hellscape, with technology used to control citizens in every way possible. The UK has now taken a massive step toward making that horror a reality, with the passage of likely the most misguided legislation in the country since the Norman invasion of 1066.

I won’t detail their Online Safety Bill here — you can find endless references by searching yourself — but its vast, blurry, nebulous, misguided rules for “protecting children from ‘harmful’ content” — a slippery slope bad enough on its own — quickly expand into a Chinese Internet-style virtual steel collar for every UK resident, chained to the government in every aspect of their online lives.

The mandated social media platform age verification requirements, which will ultimately require the showing of government IDs for access to sites, alone will create the opportunity for virtually every action of every Internet user in the UK to be tracked by the government and its minions in ever expanding ways over time.

Be careful what sites you visit or what you ask or say on them. In China, you can simply vanish under such circumstances. And in the UK? Similar disappearances coming soon, perhaps, as every site you visit, no matter the topic related to business, medical concerns, or other aspects of your family’s private and personal life, will ultimately be linked to you in government databases.

VERY similar *bipartisan* legislative efforts are taking place here in the U.S., though the U.S. court system is creating additional hurdles for their perpetrators, at least for the moment.

While some activists and legislators spend their time ranting about Internet advertising, governments around the world are working to turn the Internet into a pervasive tool for tracking your every online move and thought, permanently linked to your government IDs.

We’ve seen it in Communist China. Now we see it in so-called democracies.

Open your eyes — while you still can. 

–Lauren–

25-Aug-23

I love YouTube. I consider it to be a wonder of the world for an array of reasons. Its scale is — well, the technical term is “mindbogglingly enormous.” I subscribe to YouTube Premium (primarily to obliterate the ads — I don’t use ad blockers), and as far as I’m concerned it’s the best streaming service value on the planet. If I had to choose one streaming service only — it would be YouTube Premium, undoubtedly. I have something approaching 7000 favorited videos on YT, and I sometimes imagine that there’s a whole cluster in a dark corner of a Google data center singularly devoted to managing my giganormous watch history.

Does YT have problems? Yup. Some YT creators have to deal with inappropriate strikes and takedowns — I’ve tried to assist a bunch of these users with these sorts of disruptions over the years. Some people complain of bad video suggestions pushing them in dark directions, though this has never been an issue for me; the suggestions I get are generally great, and I do take time to train the algorithm as to what I do and don’t like. If you just use YT not logged in and/or don’t train, you’ll probably get less favorable results. Basically that’s your choice.

Obviously, no technology is perfect, and at YT’s scale even if only a tiny fraction of suggestions are problematic, it can still be a large number in absolute terms. That’s life. I still love YouTube.

There’s an oddity though with YT that I think is worth mentioning. It’s not a big concern in the scheme of things, but it really shouldn’t be happening.

This relates to the YouTube Premium “Family Plan” that lets you bundle multiple separate Google accounts in a household together so that they all have the benefits of Premium, at a better price than each subscribing to Premium separately. Under FP, each of the associated accounts is free of ads, etc., but is still separate — with their own YT play history, etc. — and can view different content simultaneously (normally, a Premium account can only view content on one device at a time). 

But a strange thing can happen with Family Plan. The videos being watched by one account on the plan can affect the suggestions on other accounts on the plan, even though they should be entirely separate in this particular respect.

This is most often noticed when topics start to pop up in the suggestions for one FP member that are totally odd for them — for example, a subject that they never view videos about. And it turns out — if the members of the FP compare notes — that some other member of the plan was watching videos on that topic, and the YT videos/channels being watched by FP member A are showing up in the suggestions for FP member B. And so on.

Most of the time this isn’t a serious concern, and can even be interesting in terms of surfacing new topics. But of course there are intrinsic privacy considerations as well. It isn’t good policy for the YT viewing habits of different family members to be intermingled in that way, without their specifically asking for such sharing. The potential family problems that could occur as a result in some cases are fairly obvious.

This has been going on with Family Plan for years, and I’ve brought this up with Google/YT myself in the past. And the responses I’ve always gotten back have either been that “it can’t happen” or “it shouldn’t happen” and … that’s pretty much where it’s been left hanging each time.

But it does still happen (I have a new report just this morning) and yeah, it really shouldn’t.

Again, not an enormous problem in the scheme of things, but not trivial either, and it’s something that definitely should be fixed.

–Lauren–

Suddenly there seems to be an enormous amount of political, regulatory, and legal activity regarding AI, especially generative AI. Much of this is uncharacteristically bipartisan in nature.

The reasons are clear. The big AI firms are largely depending on their traditional access to public website data as the justification for their use of such data for their AI training and generative AI systems.

There is a strong possibility that this argument will ultimately fail miserably, if not under current laws then under new laws and regulations likely to be pushed through around the world, quite likely in a rushed manner that will have an array of negative collateral effects that could actually end up hurting many ordinary people.

Google for example notes that they have long had access to public website data for Search.

Absolutely true. The problem is that generative AI is wholly different in terms of its data usage than anything that has ever come before.

For example, ordinary Search provides a direct value back to sites through search results pages links — something that the current Google CEO has said Google wants to de-emphasize (colloquially, “the ten blue links”) in favor of providing “answers”.

Since the dawn of Internet search sites many years ago, search results links have represented a usually reasonable, fair exchange for public websites, with robots.txt (the Robots Exclusion Protocol) available for relatively fine-grained access control that can be specified by the websites themselves, and which at least the major search firms have generally honored.
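
For reference, the protocol itself is just a plain text file served from a site’s root (e.g., example.com/robots.txt). A minimal example that bars all compliant crawlers from one directory while leaving the rest of the site open:

User-agent: *
Disallow: /private/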

But generative AI answers eliminate the need for links or other “easy to see” references. Even if “Google it!” or other forms of “more information” links are available related to generative AI answers at any AI firm’s site, few users will bother to view them.

The result is that by and large, today’s generative AI systems by their very nature return essentially nothing of value to the sites that provide the raw knowledge, data, and other information that powers AI language/learning models. 

And typically, generative AI answers (leaving aside rampant inaccuracy problems for now) are like high school term papers that haven’t even included sufficient (if any) inline footnotes and comprehensive bibliographies with links.

A very quick “F” grade at many schools.

I have proposed extending robots.txt to help deal with some of these AI issues — and Google also very recently proposed discussions around this area.

Giving Creators and Websites Control Over Generative AI:
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

But ultimately, the “take — and give back virtually nothing in return” modality of many AI systems inevitably leads toward enormous pushback. And I do not sense that the firms involved fully understand the cliff that they’re running towards in a competitive rush to push out AI systems long before they or the world at large are ready for them.

These firms can either grasp the nettle themselves and rethink the problematic aspects of their current AI methodologies, or continue their current course and face the high probability that governmental and public concerns will result in major restrictions to their AI projects — restrictions that may seriously negatively impact their operations and hobble positive AI applications for users around the world long into the future.

–Lauren–

Thoughts on AI Regulation [ 29-Jun-23 5:56pm ]

Greetings. The excellent essay:

https://circleid.com/posts/20230628-the-eu-ai-act-a-critical-assessment

(by Anthony Rutkowski) serves to crystallize many of my concerns about the current rush toward specific approaches to AI regulation before the issues are even minimally understood, and why I am so concerned about negative collateral damage in these kinds of regulatory efforts.

There is widespread agreement that regulation of AI is necessary, both from within and outside the industry itself, but as you’ve probably grown tired of seeing me write, “the devil is in the details”. Poorly drafted and rushed AI regulation could easily do damage above and beyond the realistic concerns (that is, the genuine, non-sci-fi concerns) about AI itself.

It’s understandable that the very rapid deployments of AI systems — particularly generative AI — are creating escalating anxiety regarding an array of related real world controversies, an emotion that in many cases I obviously share.

However, as so often happens when governments and technologies intersect, the potential for rushed and poorly coordinated actions severely risks making these situations much worse rather than better, and given what’s at stake, that’s an outcome to be avoided at all costs.

I don’t have any magic wands of course, but in future posts I will discuss aspects of what I hope are practical paths forward in these matters. I realize that there is a great deal of concern (and hype) about these issues, and I welcome your questions. I will endeavor to answer them as best I can. 

–Lauren–

This post could get very long very quickly, so instead I’m going to endeavor to keep this introductory discussion brief, with an array of crucial details to come later. 

In my recent posts:

An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

https://lauren.vortex.com/2023/05/17/google-account-recovery-failure-sad

and:

Potentially Serious Issues with Google's Announced Inactive Accounts Deletion Policy

https://lauren.vortex.com/2023/05/16/google-inactive-accounts-deletion

(and frankly, in many related postings over many years in this blog and other venues), I discussed the continuing problems of honest Google users being locked out of their Google accounts, often with a total and permanent loss of all their data (Gmail, photos, Drive files, etc.) that they entrusted to Google.

These lockouts can occur for an array of reasons — problems with login credentials, third-party hacking of accounts including (but not limited to) malware, Google believing that violations of its Terms of Service have occurred, and many other events.

Each of these is an entire complex topic area that I won’t detail in this post.

But the bottom line is that many Google users who feel that they have done nothing wrong find themselves locked out of their accounts — and crucially — their data at Google, and are unable to successfully navigate the existing largely automated account recovery procedures that Google currently provides.

Generally speaking, once a user who has been locked out of a Google account reaches this point, they are, to use the vernacular, SOL — there’s no way to proceed. Usually their data, no matter how important and precious to their lives, is lost to them forever.

To be sure, sometimes the failure to recover a Google account is rooted in the failure of users to provide or keep up to date the recovery information that Google requests for the very purpose of easing account recovery paths.

But the reality is that many users forget about keeping these current, or are reluctant to provide phone numbers and/or alternative email addresses (if they even have them) in the first place. That’s just the way it is.

And ultimately, even at Google’s enormous scale of users who use its services for free, there is something inherently wrong about honest users losing so much of their lives — data that Google has encouraged them to entrust to it — when an unrecovered account lockout occurs.

Over and over again — in a manner reminiscent of the film “Groundhog Day” — desperate Google users who have been locked out have asked me if there was someone they could pay to help them. Isn’t there some way, they ask, for Google to do a deeper dive into the circumstances of their lockouts, using official government IDs for proof of identity and other methods, to authenticate them back into their Google accounts, as can be done at virtually all financial institutions and most other firms?

Right now the answer is no.

But the answer should be and could be yes, if Google made the decision — by no means a trivial one! — to provide the means for such “enhanced recovery services” for Google Accounts, which in some cases (e.g., when a user is indeed at fault as the root cause of the lockout) could be chargeable (that is, paid) services as a means to help defray the additional costs involved.

This is a very complicated area with an array of trade-offs and nuances. It’s likely to be highly controversial. 

But as far as I’m concerned, the status quo of how Google account recoveries work (or fail) is no longer acceptable, especially in the current regulatory and political environment.

In future discussions, I will detail my thinking on how “enhanced recovery” for Google accounts could be accomplished in practice, and how it would benefit Google’s users, Google itself, and the wider global community that depends upon Google.

Take care, all.

–Lauren–

UPDATE: 24 May 2023: A Proposal for "Enhanced Recovery Services" for Locked Out Google Accounts

– – –

All, I am doing something in this post that I’ve never done before over these many years. I’m going to share with you an example of what Google account recovery failure means to the people involved, and this is by no means the worst such case I’ve seen — not even close, unfortunately.

I mentioned yesterday in my other venues how (for many years) I’ve routinely tried to informally help people with Google account recovery issues, because the process can be so difficult for many persons to navigate, and frequently fails. The announcement yesterday of Google’s inactive account deletion policy that I blogged about then:

https://lauren.vortex.com/2023/05/16/google-inactive-accounts-deletion

triggered an onslaught of concerns that for a time made my blog inaccessible and even delayed inbound and outbound email processing.

I’m going to include below most of the text from messages I received today from one of my readers about a specific Google account recovery failure — and how that’s affecting a nearly 90-year-old woman. I’ll be anonymizing the message texts, and I’ve of course received permission from the sender to show you this.

Unfortunately, this example is all too familiar for me. It is very much typical of the Google account recovery problems that Google users, so dependent on Google in their daily lives, bring to my attention in the hope that I might be able to help.

I’ve been discussing these issues with Google for many years. I’ve suggested “ombudspeople”, account escalation and appeal procedures that ordinary people could understand, and many other concepts. They’ve all basically hit the brick wall of Google suggesting that at their scale, nothing can be done about such “edge” cases. I disagree. In today’s regulatory and political environment, these edge cases matter more than ever. And I will continue to do what I can, as ineffective as these efforts often turn out to be. -L

 – – – Message Text Begins – – –

Hi Lauren, I tried to help a lovely neighbor (the quintessential “little old lady”) recently with her attempt to recover her legacy gmail account. We ultimately gave up and she created a second, new account instead. She had been using the original account forever (15+ years) and it was created so long ago that she didn’t need to provide any “recovery” contacts at that time (or she may have used a landline phone number that’s long been cancelled now). For at least the last decade, she was just using the stored password to login and check her email. When her ancient iPad finally died, she tried to add the gmail account to her new replacement iPad. However, she couldn’t remember the password in order to login. Because the old device had changed and she couldn’t remember the password and there was no back channel recovery method for her account, there was no way to login. I don’t know if you’ve ever attempted to contact a human being at google tech support, but it’s pretty much impossible. They also don’t seem to have an exception mechanism for cases like this. So she had to abandon hopes of viewing the google photos of her (now deceased) beloved pet, her contacts, her email subscriptions, reminders, calendar entries, etc.

I understand the desire to keep accounts secure and the need to reduce customer support expenses for a free service with millions of users. But it’s also frustrating for end users when there’s no way to appeal/review/reconsider the automated lockout. She’s nearly 90 years old, so I find it remarkable that she’s able to use the iPad. But it’s difficult to know what to say to someone like this when she asks “what can we do now” and there are no options…

I recognize that there are many different kinds of google users. Some folks (like journalists, dissidents, whistleblowers, political candidates, human rights workers, etc.) need maximum security for their communications (and their contacts). In these cases, it makes sense to employ multifactor authentication, end-to-end encryption, one time passwords, and other exceptional privacy and security features. However, there are a great many average users who find these additional steps difficult, frustrating and (esp. in the case of elderly people who aren’t necessarily very technology savvy), sometimes bewildering. It’s tough to explain that your treasured photos can’t be retrieved because you’re not the sort of user that google had in mind. Not everyone is a millennial digital native who finds this all obvious.

 – – – Message Text Ends – – –

–Lauren–

UPDATE: 24 May 2023: A Proposal for "Enhanced Recovery Services" for Locked Out Google Accounts

UPDATE (17 May 2023): An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

– – –

Google has announced that inactive personal Google accounts will be removed and all of their data deleted after two years, after a number of emailed reminders:

https://blog.google/technology/safety-security/updating-our-inactive-account-policies/

Right now I’m only going to thumbnail some potentially serious issues with this policy. They deserve a much more detailed examination that I will address when I can, but there are many associated concerns that Google did not address publicly, and these matter enormously because Google is so much a part of so many people’s lives around the planet.

– Will account names become available for reissuing after an account is deleted? Google policy historically has been that used account names are permanently barred from reissuing. I am assuming that this is still the case, but I’d appreciate confirmation. This would be the best policy from a security standpoint, of course.

UPDATE (17 May 2023): I’ve now received confirmation from Google that account names will not be reissued after these account deletions. Good.

– Given the many ways that users can lose access to their Google accounts, including password and other authentication confusion, lockouts in error due to location login issues, and many other possibilities related to authentication and account recovery complexities, I am not convinced that deleting user data after two years of inactivity is a wise policy. While keeping the data around forever is impractical, two years seems very short from a legal standpoint in an array of ways, even if routine user access is blocked after two years of inactivity. While many users locked out of their accounts simply create new accounts, many still have crucial data in those “trapped” accounts, and most users unfortunately do not use the “Takeout” facilities Google provides to download data while accounts are still active.

– The impact on user photos and public YouTube videos is of special concern. Many popular and important YouTube videos are associated with very old accounts that are likely effectively abandoned. The loss of these public videos from YouTube could be devastating.

UPDATE (17 May 2023): While their original announcement yesterday said that YouTube videos would be deleted when accounts were deleted under this policy, Google has responded to concerns about YouTube videos and has now made a statement that “At this time, we do not plan to delete accounts with YouTube videos.” Obviously this leaves some related open questions for the future, but is still great news.

– Many people use Google accounts for logging in to non-Google sites via federated login (“Login with Google”) mechanisms. While Google says these logins will continue to constitute activity, many of these accounts are likely fairly old and their associated users may not have used them for anything directly on Google for years (including reading emails). If they also have not been logging on to those third party sites for extended periods, when they do try again they’re likely to be quite upset to find their Google accounts necessary for access have been deleted.

I could go on but for now I just wanted to point out a few of the complex negative ramifications of Google’s policy in this regard, irrespective of their assertion that they’re meeting “industry standards” related to account retention and deletion. 

As it stands, I predict that a great many people are going to lose an enormous amount of data due to this Google policy — data that in many cases is very important to them, and in the case of YouTube, often important to the entire world.

–Lauren–

UPDATE (15 May 2023): And … about 48 hours after this original post, bookmarks started successfully syncing in full to my tablet, after months of failing totally (despite my many best efforts and every sync trick I know). Coincidence? Could be. But I’ll say “Thanks Google!” anyway.

– – – – – –

Greetings. Recently I asked around for suggestions to help figure out why (after trying all the obvious techniques) I could no longer get my Chrome bookmarks to sync to my primary Android 13 tablet.

Now, courtesy of a gracious #Mastodon user who pointed me at this recent article, I have the answer as to the why. But there’s no apparent fix. Bookmark sync is now broken for power users in significant ways:

https://www.androidpolice.com/google-chrome-bookmark-sync-limit/

In brief, Google appears to have imposed (either purposefully or not) an undocumented limit on the number of bookmarks permitted to be synced between devices. If you exceed that limit, usually NO bookmarks sync at all — you can end up with no bookmarks whatsoever on most affected devices.

In my case, my Android 13 phone is still syncing all bookmarks correctly, while my tablet has no bookmarks, and shows the “count limit exceeded” error in chrome://sync-internals that the above article notes.

The article suggests that the new undocumented limit is 100K for desktops and 20K for mobile devices. It turns out that I have just over 57K bookmarks currently, so why the limit is exceeded on the tablet and not on the phone is a mystery. But having ZERO synced bookmarks on the tablet is a real problem.
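
If you are curious how your own count compares, here is a minimal sketch, assuming you have used Chrome’s Bookmark Manager “Export bookmarks” option (which writes a Netscape-format HTML file), that simply counts the anchor entries in that export:

import re
import sys

def count_bookmarks(path):
    # Count bookmark entries in a Chrome bookmarks HTML export.
    with open(path, encoding="utf-8", errors="ignore") as f:
        html = f.read()
    # Each exported bookmark is an <A HREF="..."> anchor element.
    return len(re.findall(r"<A\s+HREF=", html, flags=re.IGNORECASE))

if __name__ == "__main__":
    print(count_bookmarks(sys.argv[1]))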

Yeah, there are third party bookmark managers and ways to create bookmark files that could be viewed statically, but the whole point of Chrome bookmark sync is keeping things up to date across all devices. This needs to work!

And if you feel that 57K bookmarks is a lot of bookmarks — you’re right. But I’ve been using Chrome since the first day of public availability, and my bookmarks are the road maps to my use of the Net. For them to just suddenly stop working this way on a key device is a significant problem.

I’d appreciate some official word from Google regarding what’s going on about this. Have they established new “secret” limits? Is this some sort of bug? (The error message suggests not.) Please let me know, Google. You know how to reach me. Thanks. 

–Lauren–

In several of my past recent posts:

The "AI Crisis": Who Is Responsible?
https://lauren.vortex.com/2023/04/09/the-ai-crisis-who-is-responsible

State and Federal Internet ID Age Requirements Are Hell-Bent on Turning the Internet Into a Chinese-Style Internet Nightmare
https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

and others in various venues, I have expressed concerns over the “perfect storm” that is now circling “Big Tech” from both sides of the political spectrum, with both Republicans and Democrats proposing (sometimes jointly, sometimes in completely opposing respects) “solutions” to various Internet-related issues — with some of these issues being real, and others being unrealistically hyped.

The latest flash point is AI — Artificial Intelligence — especially what’s called generative AI — publicly seen mainly as so-called AI chatbots.

I’m not going to repeat the specifics of my discussions on these various topics here, except in one respect.

For many (!) years I have asserted that these Big Tech firms (notably Google, but the others as well to one degree or another) have been negligently deficient in their public communications, failing to adequately assure that ordinary non-technical people — and the politicians that they elect — understand the true nature of these technologies.

This means both the positive and negative aspects of tech. But the important point is that the public needs to understand the reality of these systems, and not be misguided by misinformation and often politically-biased disinformation that fill the information vacuum left by these firms, often out of a misguided and self-destructive fear of so-called “Streisand Effects”, which the firms are afraid will occur if they mention these issues in any depth.

It is clear that such fears have done continuing damage to these firms over the years, while robust public communications and public education — not looking down at people, but helping them to understand! — could have instead done enormous good.

I’ve long called for the hiring of “ombudspersons” or liaisons — or whatever you want to call them — to fill these important, particular communications roles. These need to be dedicated roles for this purpose.

The situation has become so acute that it may now be necessary to have roles specific to AI-related public communications, to help avoid the worst of the looming public relations and political catastrophes that could decimate the positive aspects of these systems and, over time, seriously damage the firms themselves.

But far more importantly, it’s society at large that will inevitably suffer when politics and fear win out over a true understanding of these technologies — how they actually impact our world in a range of ways — again, both positive and negative, both now and into the future.

The firms need to do this now. Right now. All of the greatest engineering in the world will not save them (and us!) if their abject public communications failures continue as they have to date.

–Lauren–

There is a sense of gathering crisis revolving around Artificial Intelligence today — not just AI itself but also the public’s and governments’ reactions to AI — particularly generative AI.

Personally, I find little blame (not zero, but relatively little) with the software engineers and associated persons who are actually theorizing, building, and training these systems.

I find much more blame — and the related central problem of the moment — with some non-engineers (e.g., some executives at key levels of firms) who appear to be pushing AI projects into public view and use prematurely, out of fear of losing a seemingly suddenly highly competitive race, in some cases apparently deemphasizing crucial ethical and real world impact considerations.

While this view is understandable in terms of human nature, that does not justify such actions, and I fear that governments’ reactions are heading toward a perfect storm of legislation and regulations that may be even more problematic than the premature release of these AI systems has been for these firms and the public. This may potentially set back for years critical work in AI that has the potential to bring great benefits (and yes, risks as well — these both come together with any new technology) to the world.

By and large, the Big Tech firms working on AI are doing a negligent and ultimately self-destructive job of communicating the importance — and limitations! — of these systems to the public, leaving a vacuum to be filled with misinformation and disinformation that gladden the hearts of political opportunists (on both the Right and the Left) around the planet.

If this doesn’t start changing for the better immediately, today’s controversies about AI are likely to look like firecrackers next to the nuclear bombs to come.

–Lauren–

27-Mar-23

The new Utah Internet ID age laws signed today — and what other states and the feds are moving toward in the same realm — will destroy social media, and much of the rest of the Internet, as we know them.

Vast numbers of people will refuse to participate in any government ID-based scheme for age verification, no matter how secure and compartmented it is claimed to be (e.g. through third-party verifiers).

Many persons, rightly concerned about basic privacy rights, already use different names and specify different birthdays on different sites, to avoid being subjected to horrific problems in the case of data breaches, and to avoid being tracked across sites discussing unrelated topics.

These government moves are clear steps on the way toward creating a Chinese-style Internet where every individual’s Internet usage is tracked and monitored by the government, creating a vast and continuous climate of fear, oppression, and government control.

–Lauren–

Seemingly overnight, the Internet is awash with controversies over Generative Artificial Intelligence (GAI) systems, and their potential positive and negative impacts on the Net and the world at large.

It also seems very clear that unless we (for once!) get ahead of the potential problems with this new technology that seem to be rushing toward us like a freight train, there could be some very tough times ahead for creators, websites, and ordinary Internet users around the world.

I’m not writing a tutorial here on GAI, but very briefly: it’s not the kind of “backend” AI system with which most of us are already familiar — used for research and modeling, sorting the order of search results and suggestions, and even the kinds of generally useful, very brief “answers” we see as (for example) Google Knowledge Panels, featured snippets, or short Google Assistant answers (and the similar features of other firms’ products).

GAI is very different, because it creates (and this is a greatly simplified explanation) what appears to be (at least in theory) completely *new* content, based on its algorithms and the data on which it has been trained.

GAI can be applied to text, audio, imagery, video — pretty much everything we’ve come to associate with the Net. And already, serious problems are emerging — not necessarily unexpected at this early stage, but ones that we must start dealing with now or risk a maelstrom later.

GAI chatbots have been found to spew racist and other hateful garbage. The long-form answers and essays that are the stock-in-trade of many GAI systems can be beautifully written, and appear knowledgeable and authoritative — yet still be riddled with utterly incorrect information. This is a real hassle even with purely technical articles, some of which have had to be withdrawn as a result, but it gets downright scary when the subject is, as in one recent case, men’s health.

There are more problems. GAI can easily create “fake” pornography targeting individuals. It can be used to simulate people’s voices for a range of nefarious purposes — or even potentially just to simulate the voices of professional voice actors without their permission.

Eventually, the kind of scenario imagined in the 1981 film “Looker” — where actors once scanned could be completely emulated by (what we’d now call) GAI systems — could actually come to pass. We’re getting quite close to this already in the film industry and the world of so-called deepfakes — the latter potentially carrying enormous risks for disinformation and political abuse.

All of this tends to point us mainly in one direction: How GAI is trained.

In many cases, the answer is that websites are crawled and their data used for GAI purposes, without the explicit permission of the creators of that data or the sites hosting it.

Since the beginning of Search on the Internet, there has been something of a largely unwritten agreement. To wit: Search engines spider and index sites to provide lists of search results to users, and in return those search engines refer users back to those original sites where they can get more information and find other associated content of interest.

GAI in Search runs the risk of disrupting this model in major ways. By presenting what appear to be largely original long-form essays and detailed answers to user search queries, GAI makes it far less likely that users will ever visit the sites that (often unknowingly) provided the training data. Even when the GAI answers include links back, why would users bother visiting the sites that supplied the data if the GAI has already completely answered their questions?

Complicating this even further is that the outputs of some GAI systems appear to frequently include largely or even completely intact (or slightly reworded) stretches of text, elements of imagery, and other data that the GAI presents as if they were wholly original.

Creators and websites should be able to choose if and how they wish their data to be incorporated into GAI systems. 

Accomplishing this will be a complex undertaking, likely involving both technical and legislative aspects in order to be even reasonably effective, and will almost certainly always be a moving target as GAI systems advance.

But a logical starting point could be expansion of the existing Internet Robots Exclusion Protocol (REP — e.g. robots.txt, meta tags, etc.) currently used to express website preferences regarding search indexing and associated functions. While the REP is not universally adhered to today, major sites usually do follow these directives.

Indeed, even defining GAI-related directives for REP will be enormously challenging, but this could get the ball rolling at least.
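To make the idea a bit more concrete, here is a purely hypothetical sketch of what GAI-related REP directives might look like in a robots.txt file. The conventional directives are real and widely honored today; the GAI directives are my own invention for illustration only, and are not part of any existing standard:

    # Conventional REP directives (real, widely honored today):
    User-agent: *
    Allow: /

    # Hypothetical GAI directives -- NOT part of any actual standard,
    # sketched here only to illustrate the kinds of controls needed:
    GAI-Training: disallow
    GAI-Training-Allow: /public-domain/
    GAI-Attribution: required

Analogous per-page controls could presumably be expressed via meta tags, much as indexing preferences are today.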

We need to immediately start the process of formulating the control methodologies for what training data Generative Artificial Intelligence systems are permitted to use, and the manners in which they do so. Failure to begin considering these issues risks enormous backlash against these systems going forward, which could render many of their potential benefits moot, to the detriment of everyone.

–Lauren–

Greetings. The last hours and minutes of 2022 are ticking off, and we’re all being drawn inexorably into the new year and even deeper into the 21st century.

In my previous post of early October — Social Media Is Probably Doomed — I discussed various issues that call into question the ability of social media as we’ve known it to continue for much longer. Since then we’ve seen the massive chaos at Twitter when Musk took over, the rapid rise of distributed social media ecosystem Mastodon, and an array of other new confounding factors that make this analysis notably more complex and less deterministic. 

It’s perhaps interesting to note that only a year ago, pretty much nobody had predicted that Elon Musk would — voluntarily, single-mindedly, and over such a short period of time — reinvent himself as a pariah to a large segment of his customers and the public at large, and put himself in a position to remake Twitter in the image of the very worst that social media can offer.

The lessons that we can draw from this are many, beyond the obvious ones such as that dramatic, abrupt changes in the tech world — and broader society — should be considered more the norm than the exception, especially in our current toxic political environment.

And it’s important to note that no technology — nor the persons who develop, deploy, operate, or use it — is immune from such disruptions.

This includes Mastodon, of course. And while the distributed nature of this ecosystem perhaps provides some additional buffering against sudden changes that more centralized services lack, that does not make it invulnerable to many of the same kinds of problems plaguing other social media, despite best intentions.

And this is definitely not to assert that blindly attempting to resist changes is the proper course. In fact, *not* being willing to appropriately evolve with a massive growth in the quantity of users — especially as increasingly more nontechnically-oriented persons arrive — is likely lethal to a social media ecosystem in the long run.

As we stand on the cusp of 2023, there is immense potential in Mastodon and other distributed social media models. But there are also enormous risks — fear of change being among the most prominent and potentially negatively impactful of these.

Given all that’s happening, I suspect that this coming year will be a crucial turning point for social media in many ways — both technical and nontechnical in scope.

We can try to hold back the winds of change in these regards, or we can endeavor to harness them for the good of all. That, my friends, is not the choice of technology itself, it is solely up to us.

All the best to you and yours for a great 2023. Happy New Year!

–Lauren–

Social Media Is Probably Doomed [ 05-Oct-22 2:00am ]

UPDATE (31 December 2022): 2023 and Social Media's Winds of Change

Social media as we’ve known it is probably doomed. Whether a decline in social media would on balance be good or bad for society I’ll leave to another discussion, but the handwriting is on the wall for a major decline in social media overall.

As with most predictions, the timing and other details will surface in coming months and years, but the overall shape of things to come is not terribly difficult to visualize.

The fundamental problem is also clear enough. A vast range of entities at state, federal, and international levels are in the process of enacting, invoking, or otherwise planning a range of regulatory and other legal mandates that would apply to social media firms — with many of these requirements being in direct and total opposition to each other.

The most likely outcome of putting these firms “between a rock and a hard place” will be a drastic reduction in the social media services provided, resulting in a massive decrease in ordinary persons’ ability to communicate publicly, rather than the increase that various social media critics have been anticipating.

Let’s very briefly review just some of the factors in the mix:

The political Right in the U.S. generally wants public postings to stay up, even if they contain racist or other hate speech or misinformation/disinformation. This is the outline of the push from states like Texas and Florida. Meanwhile, the Left and other states like California want more of the same sort of postings taken down even faster than they are now. Unless you can somehow provide different feeds on a posting-by-posting basis to users in different states (and what of VPN usage from other areas?), this creates an impossible situation.

Both the Left and Right hate Section 230, but for opposite reasons, relating to my point just above. Even the Biden White House has this wrong, arguing that cutting back 230 protections would force social media firms to more tightly moderate content, when in reality tampering with 230 would make hosting most UGC (User Generated Content) far too risky.

Elon Musk has proposed that Twitter carry any postings that aren’t explicitly illegal or condoning violence. This suggests an increase in the kind of hate speech and disinformation that not only drives away many users, but also tends to cause enormous problems for potential advertisers and network infrastructure providers, who usually do not want to be associated with such materials. And then of course there’s the EU — which has its own requirements (much more robust than in the U.S.) for dealing with hate speech and misinformation/disinformation.

There are calls to strip Internet users of all anonymity, to require use of real names (tied to official IDs, perhaps through some third-party mechanisms) based on the theory that this would reduce hate speech and other attack speech. Yet studies have shown that such abhorrent speech continues to flourish even when real names are used, while forcing real names causes already marginalized persons and groups to be even further disadvantaged, often in dangerous ways. Is there a middle ground on this? Perhaps requiring IDs be known to a third party (in case of abuse) before posting to large numbers of persons is permitted, while still permitting the use of pseudonyms for those postings? Maybe, but it seems like a long shot.

Concerns over posting of terrorist content, live streaming of shootings, and other nightmarish postings have increased calls for pre-moderation of content before it goes public. But at the massive scale of the large social media firms, it’s impossible to see how this could be practical, for a whole range of reasons, unless the amount of content permitted from the public were drastically reduced.

And this is just a partial list. 

Social media can’t operate on any reasonable basis — or retain any real value and practicality — when every state and every country may demand a different and conflicting set of rules. While there are certainly some politicians and leaders who do understand these issues in considerable depth, many others don’t worry about whether their technical demands are practical or what the collateral damage would be, only whether they’re good for votes come the next election.

And now we reach that part of this little essay where I’m expected to announce my preferred solution to this set of problems. Well dear readers, I’ve got nothing for you. I don’t see any practical solutions for these dilemmas. The issues are in direct conflict and opposition, and there is no obvious route toward their reconciliation or harmonization. 

So I can do little more here than push the needle into the red zone, sound the storm warnings, and try to point out that the paths we’re taking — absent some almost unimaginable changes in the current patterns — are rocketing us rapidly toward a world of social media that will likely briefly flare brightly and then go dark, like an incandescent light bulb at the end of its life, turned on just one too many times.

This analogy isn’t perfect of course, and there will continue to be some forms of social media under any circumstances. But the expected experience seems most likely to become increasingly constrained over time, along with all other aspects of publicly accessible user-provided materials — the incredible shrinking content.

As I said earlier, nobody knows how long this process will take. It won’t happen overnight. But we’ll have taken the path into this wilderness of our own free will, eyes wide open.

Please don’t forget to turn off the lights on your way out.

–Lauren–

UPDATE (25 January 2023): Google has announced that it will terminate this program at the end of this month (31 January 2023).

– – – – – –

Recently in Google's Horrible Plan to Flood Your Gmail with Political Garbage I discussed Google’s plan to permit “official” political emails to bypass Gmail spam filters, with users able to opt out of this bypass only on a sender-by-sender basis as political emails arrive. So as new “official” political senders proliferate, this will be a continuing unwanted exercise for most Gmail users.

The Federal Election Commission has now posted a draft decision that effectively gives Google a go-ahead for this plan (UPDATE: 11 August 2022: The FEC has now officially approved the plan). The many comments the FEC received regarding this proposal were overwhelmingly negative (it was difficult to find any positive comments at all), but the FEC is only ruling on the technical question of whether such a plan would represent prohibited in-kind political contributions.

My view is that Gmail users should be able to opt-out of this entire political spam bypass plan if that is their choice. Political emails would in that case continue going into those individual users’ spam folders to the same extent that they do now.

My specific recommendation:

The first time that a political email arrives for a Gmail user that would bypass spam filtering under the Google plan, the Gmail user would be presented with a modal query with words to this effect (and yes, wording this properly will be nontrivial):

Do you want official political emails to arrive in your Gmail inbox rather than any of them going to your spam folder, unless you indicate otherwise regarding specific political email senders? You can change this choice at any time in Gmail Settings.
(TELL ME MORE)
YES
NO

There is no “default” answer to this query. Users must choose either YES or NO to proceed (with the TELL ME MORE choice branching off to an explanatory help page).
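To illustrate the flow I’m proposing, here is a minimal sketch in Python. Every name here is mine, invented purely for illustration; this is not actual Gmail code or any real Google API:

    class User:
        def __init__(self):
            # None until the user answers the one-time modal query
            self.political_bypass_choice = None

    def ask_user_modal():
        # Stand-in for the modal; TELL ME MORE would branch to a help page.
        choice = input("Official political email to your inbox? (YES/NO): ")
        while choice.strip().upper() not in ("YES", "NO"):
            # No default answer: the user must explicitly choose to proceed.
            choice = input("Please answer YES or NO: ")
        return choice.strip().upper()

    def route_political_email(user, message, is_spam):
        if user.political_bypass_choice is None:
            user.political_bypass_choice = ask_user_modal()
        if user.political_bypass_choice == "YES":
            return "inbox"    # official political email bypasses spam filtering
        return "spam" if is_spam(message) else "inbox"    # status quo behavior

Note that the choice is made once, up front, and (per my recommendation) would remain changeable at any time in Settings.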

This is a matter of showing respect to Gmail users. The political parties do not own Gmail users’ inboxes, but users who are concerned about missing political emails that might otherwise go to the spam folder would be able to participate in this program, while other users would not be forced into participation against their wills.

Of course this will not satisfy some politicians who incorrectly assume that so much political email ends up in spam due to a claimed political bias against them by Google. In fact, Google applies no political bias at all to Gmail — so much political email ends up in spam precisely because that’s where most Gmail users want it to be.

Google is between the proverbial rock and a hard place on this matter, but I’m asking Google to side with their users. I’d prefer that the Gmail political spam bypass plan not be deployed at all, but if it’s going to happen, then let’s give Google’s users a choice to participate or not, right up front.

It’s the Googley thing to do.

–Lauren–

UPDATE (25 January 2023): Google has announced that it will terminate this program at the end of this month (31 January 2023).

UPDATE (11 August 2022): The Federal Election Commission has now officially approved this Google plan.

UPDATE (3 August 2022): How to Fix Google's Gmail Political Spam Bypass Plan

UPDATE (3 August 2022): A Federal Election Commission Draft APPROVES this plan. See: https://www.fec.gov/files/legal/aos/2022-14/202214.pdf

UPDATE (19 July 2022): Public comments on this proposal can now be viewed here on the Federal Election Commission site.

UPDATE (14 July 2022): The Federal Election Commission today extended the public comment period for this issue from a deadline of July 16 to a new ending date of August 5th. I have updated this post accordingly.

– – – – – –

Google is backed into a corner, and Google’s attempt to get out of this corner could be very bad for Gmail users. You have just a few weeks remaining to make your opinion known about this. Please read on.

While Google studiously avoids political bias, the GOP has been bitching for ages, making the ludicrous claim that Google purposely directs GOP political emails into Gmail users’ spam folders. The GOP asserts that Google directs more political emails from Republicans than from Democrats into the spam jail, and that this is because (the GOP claims) Google hates Republicans.

Not true. The reason more GOP political emails end up in spam is that spam is exactly where most Gmail users want those emails to be.

While both Democrats and Republicans are guilty of sending unwanted, unsolicited political emails, the fact is that Republicans send more in quantity, and they tend to be more insidious, including traps like automatic recurring payments after supposedly one-time donations, and claims (like repeating Trump’s Big Lie about the 2020 election) that are misleading at best and often ludicrous and dangerous. This crap deserves to be in spam.

In an attempt to get out from under what are mostly GOP complaints, Google has asked the Federal Election Commission for approval of a plan to make emails from authorized candidate committees, political party committees, and leadership political action committees registered with the FEC exempt from spam detection, as long as they abide by Gmail’s rules on phishing, malware, and illegal content.

There’s stuff in there about notifying users the first time they get one of these emails from a campaign so that they can (supposedly) opt out, along with other details. It doesn’t matter. This plan will bury many Gmail users under a mountain of stinking swill.

Google’s plan will never work, for a couple of reasons.

One is that campaign and other political mailings multiply and spread like a hideous plague. I’ve had the unpleasant experience of helping a Gmail user clean up the mess created when they subscribed to a single political website, in this case, yes, a Trump site that later was found to be soliciting funds for one purpose but actually using them for something else entirely. Big surprise, huh? 

In almost no time at all, this had metastasized into political mailings from affiliated groups spouting lies and begging for money, mixed in with all manner of political-appearing phishing attempts and other scams. These were showing up in his Gmail literally every few minutes. An utter nightmare. This doesn’t happen only with the GOP — though they’re the larger culprit in this saga.

The second reason that the Google plan will fail is that it will never satisfy the GOP. They’ve already proposed legislation that would make it illegal to send political email into spam. They want you to see all of it, every single word, whether you want to see it or not, whether you ever asked to see it or not.

The bottom line is that the Google plan will result in your Gmail inbox being flooded with unsolicited political garbage that you’ll need to sort through and try (good luck!) to unsubscribe from. Whether you’re a Democrat, a Republican, an Independent, or something else entirely, this probably isn’t how you really want to be spending your days.

Again, I realize that Google has been unfairly forced into this position, but that can’t and doesn’t give this plan a pass.

The Federal Election Commission is now allowing for public comments until August 5th regarding this terrible idea. You can email your comments to:

ao@fec.gov

Please note that such emails may become part of the publicly inspectable record related to this issue.

It’s been many years since I’ve seen a worse proposal related to email spam, and it’s very unfortunate that Google has been forced into this situation. But that’s where we are, so speak now or forever hold your peace.

–Lauren–

In my very recent post:

“Internet Users’ Safety in a Post-Roe World”

I expressed concerns regarding how Internet and telecommunications firms would protect women’s and others’ data in a post-Roe v. Wade world of anti-abortion states’ health data demands.

Google has now briefly blogged about this, at:

“Protecting people’s privacy on health topics”

The most notable part of the Google post is the announcement of this important change:

“Location History is a Google account setting that is off by default, and for those that turn it on, we provide simple controls like auto-delete so users can easily delete parts, or all, of their data at any time. Some of the places people visit — including medical facilities like counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others — can be particularly personal. Today, we’re announcing that if our systems identify that someone has visited one of these places, we will delete these entries from Location History soon after they visit. This change will take effect in the coming weeks.”

I definitely endorse this change, which aligns with the suggestions in my above referenced blog post regarding handling of sensitive location data. Thank you Google for taking this crucial action. This is an excellent start.

However, Google has not yet publicly addressed the issues I noted regarding how these sensitive topics in search histories (both as stored by Google itself and/or in browsers) could also be abused by anti-abortion states hell-bent on pursuing women and others as part of those states’ extremist agendas, including in many instances abortion bans without exceptions for rape and incest.

Again, I praise Google for their initial step regarding location data, but there’s much more work still to do!

–Lauren–

Greetings. I write the following with no joy whatsoever.

I have reluctantly come to the conclusion that it may be necessary to legislate that any social media user who wishes to have their posts seen by more than a small handful of users will need to be authenticated by any (significantly sized) site, using government IDs.

This identification information would be retained by the firms so long as the users are active and for some specified period afterwards. Users would *not* be required to use their real names for posts, but the linkages to their actual IDs would be available to authorities in cases of abuse under appropriate, precisely defined circumstances, subject to court oversight. 

This would include situations where a post may be forwarded to larger audiences by others, which will be a technical challenge to implement.

The ability to reach large audiences on today’s Internet should be a privilege, no longer a right.

It is very sad that it has come to this.

–Lauren–

UPDATE (1 July 2022): My Thoughts About Google's New Blog Post Regarding Health-Related Data Privacy

UPDATE (24 June 2022): As expected, the U.S. Supreme Court today overturned Roe v. Wade, bringing the issues discussed below into immediate focus.

TL;DR: By no later than early this July, it is highly probable that a nearly half-century nationwide precedent providing women with abortion-related protections will be partly or completely reversed by the current U.S. Supreme Court (SCOTUS). This sea change, especially impacting women’s rights but with even broader implications now and into the future, would immediately and dramatically affect many policy and operational aspects of numerous important Internet firms. Unless effective planning for this situation takes place imminently, the safety of women, the well-being of Internet users more generally, and crucial services of these firms themselves will in all likelihood be at risk in critical respects.

– – – – – –

Since the recent leak of a SCOTUS draft decision that would effectively eliminate the national protections of Roe v. Wade, and subsequent remarks by some of the associated justices, it is now widely assumed that within a matter of days or weeks a partial or total reversal of Roe will revert the vast majority of abortion-related matters back to the individual states. 

Many politicians and states have already indicated their plans to immediately ban most or even all abortions, including in some cases those related to rape and incest, and even those to preserve the health of the woman, with only narrow exceptions even to save mothers’ lives. Some of these laws may effectively criminalize miscarriages. Some may introduce both civil and criminal penalties related to abortion, possibly bringing homicide or murder charges against involved parties, potentially including the pregnant women. 

Various states plan to try extending their bans and civil/criminal penalties to include anyone who “participates” in making abortions possible, even if they are in other states, as when a woman travels to a different state for an abortion (the legality of one state attempting to impact actions in another state in this manner is unclear, but with today’s SCOTUS no possibilities can be safely ignored). Actions by some states to try to ban obtaining, ordering, or providing various abortion drugs are also already being enacted. Note that SCOTUS has to date allowed the Texas mechanism for suing abortion providers to continue, which has largely blocked abortions in that state.

“Trigger laws” already in place in some states along with the statements of state legislators indicate that near total or total abortion bans will immediately become law in various states if the anticipated SCOTUS decision is announced. 

Anti-abortion and affiliated factions are already planning — using the reasoning of the expected SCOTUS decision as a foundation — for follow-up actions pushing for national abortion bans, limits on contraception, banning gay marriage, rolling back LGBTQ+ rights, and related activities. U.S. Senate Republican Leader Mitch McConnell has recently proclaimed that a nationwide abortion ban is possible if the GOP retakes the House, Senate, and presidency. 

These events are creating what could become an existential threat to many Internet users and to key aspects of many Internet firms’ policy and operational models.

Given the sweeping and unprecedented scope of the oppressive laws that would be unleashed on pregnant women and anyone else who becomes involved with their healthcare, especially given the civil and even criminal penalties being written into these laws, it seems inevitable that demands for access to data in the possession of many Internet and telecommunications firms relating to user activities will drastically increase.

Search histories (both server and browser) and potentially even stored email data could be sought looking for queries about abortion services, abortion drugs, and numerous other related topics. Location data (both targeting specific users, and data from broader geofence warrants associated with, for example, abortion providers) could be demanded. A range of other resulting data demands are also highly probable. It is also expected that there would be even more calls for government-mandated backdoors into end-to-end encrypted messaging systems.

Women may put their health and lives at risk by not seeking necessary health services, for fear of these abortion laws. Women’s partners, other family members, friends, associates, and healthcare providers may reasonably believe that their livelihoods or freedom may be compromised if they are found to be providing, or aiding in any manner with, abortion services.

Many users may cease using Internet and various telecommunications services in the manners that they previously would have, out of concerns that their related activities and other data could ultimately fall into the hands of state or other officials, and then be used to track and potentially prosecute them under these abortion-related laws.

This situation is a Trust & Safety emergency of the first order for all of these firms.

While some firms already provide users a range of search/location history control tools, I would assert that most users do not understand them and are frequently unaware of how they are actually configured.

I believe that the best mechanism at this time to help protect women and affiliated others who would be victimized by these state actions is to not save the associated data in the first place, unless a user decides that they desire to have that data saved.

One possibility would be for these firms to proactively offer users the option to not save (or alternatively, very quickly expunge) their search, location, and other user activity data associated with abortion and important related issues — both on company servers, and within browser histories if practicable. Users who wished to have any of these categories of data activity saved as before could choose not to exercise this option.

Unfortunately, a database of users who opt out of having this data saved may itself be an attractive data demand target by parties who may assume that it mainly represents individuals attempting to hide activities related to abortions. This possibility may argue for the preferred default behavior being to not save this data, and offering users the option of saving it if they so choose.
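As a rough illustration of the “not save the data in the first place” approach, here is a toy sketch in Python. The crude keyword matcher below is only a placeholder (accurately identifying sensitive categories is the genuinely hard part), and all names here are hypothetical:

    # Toy sketch: sensitive activity is not saved by default; users may
    # explicitly opt IN to saving it. All names are illustrative only.

    SENSITIVE_TOPICS = ("abortion", "reproductive health")   # placeholder list

    def is_sensitive(query: str) -> bool:
        # A real system would need far more robust classification than this.
        return any(topic in query.lower() for topic in SENSITIVE_TOPICS)

    def maybe_log_activity(user_prefs: dict, query: str, history: list) -> None:
        # Default is NOT to save; saving requires the user's explicit choice.
        if is_sensitive(query) and not user_prefs.get("save_sensitive", False):
            return    # the sensitive entry is never written in the first place
        history.append(query)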

While these changes could be part of a desirable broader effort to give users more control over which specific aspects of their “personally sensitive” activity data are saved, this would of course be a significantly larger project, and time is of the essence given the imminent SCOTUS ruling. 

Obviously I am not here addressing the detailed legal considerations or potential technical implementation challenges of the proposals above, and there may exist other ways to quickly ameliorate the risks that I’ve described, though practical alternatives are not obvious to me at present.

However, I do feel strongly that the status quo regarding user activity data in a post-Roe environment could create a nightmarish situation for many women and other Internet users, and be extraordinarily challenging for firms from Trust & Safety and broader policy and operational aspects. 

I strongly recommend that actions be taken immediately to protect Internet users from the storm that will likely arrive very shortly indeed.

–Lauren–

It seems like only a few years ago, the entire world was enamored of Big Tech and the Internet — and pretty much everyone was trying to emulate their most successful players. But now, to watch the news reports and listen to the politicians, the Internet and Big Tech are Our Enemies, responsible for everything from mass shootings to drug addiction, from depression to child abuse, and seemingly most other ills that any particular onlooker finds of concern in our modern world.

The truth is much more complex, and much more difficult to comfortably accept. For the fundamental problems we now face are not the fault of technology in any form, they are fully the responsibility of human beings. That is, as Pogo famously said, “We have met the enemy, and he is us.”

What’s more, most users of social media and other Internet services don’t realize how much they have to lose as a result of the often politically motivated faux “solutions” being proposed (and in some cases already passed into law) that could literally cripple many of the sites that billions of us have come to depend upon in our daily lives.

Hate speech, for example, was not invented by the Internet. While it can certainly be argued that social media increased its distribution, the intractable nature of the problem is clearly demonstrated by calls from the Right to leave most hate speech available as legal speech (at least in the U.S. — other countries have different legal standards regarding speech), while the Left (and many other countries) want hate speech removed even more rapidly. Both sides propose draconian penalties for failures to comply with their completely opposite demands.

In the U.S., some states have already passed laws explicitly prohibiting Big Tech from removing wide ranges of speech, much of which would be considered hateful and/or outright disinformation. These laws are currently blocked by court actions, but not yet on a permanent basis.

The utter chaos that would be triggered by enforcement of such laws, and by associated attempts to undermine the crucial Communications Decency Act Section 230, is obvious. If firms are required by law not to remove speech that they consider to be dangerous misinformation or hate speech, they will almost certainly find themselves cut off from key service providers that they need to stay in operation, who won’t want to keep doing business with them. Perhaps laws would then be passed to try to require that those providers not cut off social media firms in such cases. But what of advertisers who do not wish to be associated with vile content? Laws to force them to continue advertising on particular sites are unlikely in the extreme.

Similar dilemmas apply to most other areas of Big Tech and the Internet that are now the subject of seemingly endless condemnation. There are calls for end-to-end encryption of chat systems and other direct messaging to protect private conversations from outside surveillance and tampering — but there are simultaneously demands that governments be able to see into these conversations to try to detect child abuse or possible mass shooter events before they occur. Another enormous category of conflicting demands will arise as the U.S. Supreme Court drastically scales back fundamental protections for women.

Even if encryption were banned (a ban that we know would never be anywhere near 100% effective), the sheer scale of the Internet in general, and of social media in particular, are such that no currently imaginable combination of human beings and artificial intelligence could usefully scan and differentiate false positives from genuine threats among the nearly inconceivably enormous volumes of data involved. False positives have real costs — they divert scarce resources from genuine threats where those resources are desperately needed.
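Some purely illustrative arithmetic (the numbers below are mine, invented only to show the shape of the problem) makes the false-positive point concrete:

    # Hypothetical base-rate arithmetic at social media scale.
    messages_per_day = 1_000_000_000    # assumed daily message volume
    true_threats     = 500              # assumed genuine threats among them
    false_pos_rate   = 0.01             # an optimistic 1% false-positive rate
    true_pos_rate    = 0.99             # assume 99% of genuine threats flagged

    false_alarms = (messages_per_day - true_threats) * false_pos_rate
    real_flags   = true_threats * true_pos_rate

    print(f"False alarms per day: {false_alarms:,.0f}")     # about 10,000,000
    print(f"Genuine threats flagged: {real_flags:,.0f}")    # about 495

Even with an implausibly good classifier, reviewers would face roughly 20,000 false alarms for every genuine threat.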

Big Tech now finds itself firmly between the proverbial rock and the hard place. Governments, politicians, and others are demanding changes that in many cases are in 180-degree opposition to each other (“Take down violating posts faster! No, leave them up — taking them down is censorship!”), while also calling for technologically impractical approaches to monitoring social media (both public postings and private messages/chats) at scale. Many of these demands would lead inevitably to requiring virtually all social media posts to be pre-moderated and pre-approved before being permitted to be seen publicly. Every public post. Every private chat. Every live stream throughout the totality of its existence.

Only in such or similar ways could social media firms meet the demands being heaped upon them — even if the inherent conflicts among the demands of different groups and political factions could somehow be harmonized, and even leaving aside the associated privacy concerns.

But this is actually entirely academic at the kinds of scales at which users currently post to social media. Such pre-moderation is not possible in any kind of effective way without drastically reducing the total volume of user content that is made available.

This would leave Big Tech with only one likely practical path forward. Firms would need to drastically and dramatically reduce the amount of UGC (User Generated Content) that is submitted and publicly posted. All manner of postings — written, video, audio, prerecorded content and live streams, virtually everything that any user might want other users to see — would need to be curtailed. A tiny percentage of today’s volume might continue to be publicly surfaced after the required pre-moderation, but the result would be a desert ghost town compared with today’s social media landscape.

There are some observers who upon reading this might think to themselves, “So what? To hell with social media! The Internet and the world will be better without it.” But this is fundamentally wrong. The ability of ordinary people to communicate with many others — without having to channel through traditional mass media gatekeepers — has been one of the most essential liberating aspects of the Internet. The appropriate responses to the abusive ways that some persons have chosen to use these capabilities do not include permitting governments to decimate a crucial aspect of the Internet’s empowerment of individuals.

Might governments ultimately expand their monitoring edicts to include email? Will attempts to ban VPNs become mainstream around the planet? There’s no reason to assume that governments demanding mass data surveillance would hesitate in any of these respects.

Of course, if this is what voters really want, it’s what their politicians will likely provide them. Possible alternatives that might help to limit some abuses — one suggestion at least worth discussing is requiring social media firms to confirm the identities of users posting to large groups before such postings are visible — may not be seriously considered. We shall see.

Unfortunately, most users of the Internet and social media are ill-informed about the realities of these situations. Most of what they are seeing on these topics is political rhetoric devoid of crucial technological contexts. They are purposely kept uninformed regarding the ramifications of the false “remedies” that some politicians and haters of Big Tech are spewing forth daily.

We are on the cusp of having major parts of our daily lives seriously disrupted by political demands that would wither away many of the services on the very sites that are so important to us all.

–Lauren–

 