Breaking Christian News

Mea Culpa: Google Is Even More Evil Than I Thought

Stephen Green, Opinion: Mar 6, 2024
PJ Media

[PJMedia.com] ...Google Gemini faceplanted so hard that, instead of pointing derisively and letting loose with his patented "Ha! Ha!", Nelson Muntz reached down to help Google up off the pavement. (Image: Pixabay)

And yet I did come partly to the company's defense—and not because my dark little heart grew three sizes that day or any mawkish garbage like that. "Google CEO Sundar Pichai wasn't sitting at his computer in the days before the image function went live," I wrote two weeks ago, "laughing, 'MUAHAHAHAHAHA! At last, I have perfected the No White Men algorithm!'"

Google Gemini, I concluded, wasn't doing anything more sinister than reflecting the Woke bias already garnishing every one of Google's crap sandwiches.

Mea culpa. Mea maxima culpa.

As it turns out, Google's Gemini team went to extra-special lengths to make the image generator as Woke as humanly possible, doing everything within their digital powers to lecture users, scold users, and erase white people—particularly white men—from history.

Mike Solana at Pirate Wires detailed exactly what happened earlier this week, taking readers "inside the DEI hivemind that led to Gemini's disaster." Here's the convoluted process that went on inside Gemini's black box before Google pulled the plug:

A user makes a request for an image in the chat interface, which Gemini—once it realizes it's being asked for a picture—sends on to a smaller LLM that exists specifically for rewriting prompts in keeping with the company's thorough "diversity" mandates. This smaller LLM is trained with LoRA on synthetic data generated by another (third) LLM that uses Google's full, pages-long diversity "preamble." The second LLM then rephrases the question (say, "show me an auto mechanic" becomes "show me an Asian auto mechanic in overalls laughing, an African American female auto mechanic holding a wrench, a Native American auto mechanic with a hard hat" etc.), and sends it on to the diffusion model.

Gemini's backend went through the same process no matter what the prompt was, which was why none of the Vikings were white, the Nazis were black women, etc.
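To make the reported plumbing concrete, here is a minimal Python sketch of that kind of unconditional prompt-rewriting stage. Everything in it is hypothetical: the function names, the rewrite template, and the one-line stand-in for the preamble are assumptions for illustration, since Google has not published Gemini's actual code.

    # A minimal, hypothetical sketch of the pipeline Solana describes. Every
    # name, template, and rule below is an assumption for illustration; Google
    # has not published Gemini's actual prompt-rewriting code or preamble.

    # Stand-in for Google's pages-long diversity preamble (contents unknown).
    DIVERSITY_PREAMBLE = "Depict people of several ethnicities and genders."


    def rewrite_prompt(user_prompt: str) -> str:
        """Stand-in for the smaller, LoRA-tuned LLM that rewrites prompts.

        A real system would call a language model conditioned on the preamble;
        this stub applies a fixed template unconditionally, which mirrors the
        reported failure mode: the rewrite runs no matter what was asked.
        """
        return (
            f"{user_prompt}, shown as an Asian person, an African American "
            f"woman, and a Native American person"
        )


    def generate_image(diffusion_prompt: str) -> str:
        """Stand-in for the diffusion model; echoes what it would render."""
        return f"[diffusion model renders: {diffusion_prompt}]"


    def handle_request(user_prompt: str) -> str:
        # The user sees only the final image, never the rewritten prompt.
        return generate_image(rewrite_prompt(user_prompt))


    # Because the rewrite is unconditional, a historical prompt is rewritten
    # exactly like a generic one -- hence the non-white Vikings.
    print(handle_request("show me an auto mechanic"))
    print(handle_request("show me a Viking"))

The point the sketch makes is the one Solana reports: the rewriting step sits invisibly between the user and the diffusion model and fires on every request, with no check for whether the injected "diversity" makes sense for the subject.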

Solana said to a Google engineer that it looked to him as though "in a way [diversity] is the product"...

Read this article in its entirety Here.