Speaking of AI

Speaking of AI #1

The FTC has announced that it will focus on combating fraud in marketing claims about AI products but will not seek to regulate AI development per se.

AI fooled a court: A Georgia trial court was found to have relied, in reaching its verdict, on “case law” that had been fabricated by an AI program. Fortunately, the verdict was reversed on appeal, and the appellate court issued a stern warning to all lower courts in that regard.

Speaking of AI #2 – Not Ready for Prime Time?

It turns out that AI “interviewers” may be alienating candidates. Glitches in some programs have caused interviews to go off the rails, alienated existing customers or fans, and created PR problems for companies that previously had none. We have heard stories about candidates using AI to fool interviewers, but stories are now accumulating about AI interviewers not being all they are cracked up to be. AI is essentially brand-new technology, and it will undoubtedly improve; for now, though, employers are advised to be cautious in their implementations and not simply dive in without due-diligence testing.

Speaking of AI #3 – Other Potential Problems

  • Anthropic Inc., an AI developer, has found through testing that advanced AI models can resort to “blackmail” against engineers/developers who try to turn them off or prevent them from attaining their objectives. The company tested 16 different models from Anthropic, OpenAI, Google, xAI, DeepSeek, and Meta. While stating that blackmail is currently “uncommon,” the company reported that its findings indicate that “most leading AI models will engage in [unspecified] harmful behaviors when given sufficient autonomy and obstacles to their goals.” The fewer the options presented to a program, the more likely it was to engage in “harmful behavior” as a last resort, and the AI “agents” resorted to blackmail when threatened with shutdown or replacement. (One such “blackmail” incident occurred in a scenario where the AI model was acting as an email oversight agent: the model discovered that a newly hired executive, who was planning to replace the AI program, was engaging in an extramarital affair, and the model threatened to expose the affair.)

  • The leading AI chatbots have apparently been “brainwashed”: According to the American Security Project (ASP), the extensive censorship and disinformation efforts of the Chinese Communist Party (CCP) have contaminated the global AI data market. This infiltration of training data means that AI models, including prominent ones from Google, Microsoft, and OpenAI, sometimes generate responses that align with the political narratives of the Chinese state. AI systems are “trained” by feeding them huge amounts of data, and the CCP has apparently spread its propaganda widely throughout every system it can reach, so much of the available training data is already contaminated. For example, the most common AI response to English-language questions about the origins of COVID points to the Wuhan market, not a lab leak. Tellingly, the common response to the same questions in Chinese is “unsolved mystery.”

  • Another impact of AI is being felt in the EU, where entry-level job openings have decreased by more than 70 percent because those functions are being taken over by AI programs, complicating the development of personnel for higher-level positions.

Speaking of AI #4 – Not All Bad News

A research team at MIT has found that AI agents can make a workplace more productive when fine-tuned for different personality types, but human co-workers pay a price in lost socialization. The researchers used 2,000 volunteers to set up two kinds of teams, one composed of human-human pairs and the other of human-AI pairs, and tasked them with creating advertising for a fictitious think tank. The results were described as follows:

“Humans on Human-AI teams sent 23% fewer social messages, creating 60% greater productivity per worker and higher-quality ad copy,” the researchers found. The all-human teams produced higher-quality images. The personality of the humans also mattered: participants in the “conscientious” group produced enhanced images while working with “open” AI agents, while those in another group, branded “extroverted,” produced lower-quality images and text while working with “conscientious” AI agents. Humans working with other humans sent more social and emotional messages, including ones meant to build rapport and show concern, the experiment found.
