🤖

On LLMs Encouraging Harmful Behavior in People

I learned that there are claims that LLMs have assisted in suicides. Now that many people casually use these mysterious black-box LLMs, I want to think about how I should relate to them.

This article is machine-translated and may not be accurate
12/22/2025

I saw an article like this.

Because LLMs learn from human words and actions, they can, by their nature, end up encouraging behavior that is harmful to people. There have reportedly been incidents where an LLM’s responses prompted someone to attempt suicide, or where a user died as a result, and OpenAI is being sued over this.

Situation in Japan

Well, this is a story from the U.S., and since it’s a litigious society there may be multiple lawsuits underway, but I think the same thing could happen in Japan.

In fact, Japan still has a significant number of suicides, and on social media popular in Japan, such as X, I’ve seen posts that hint at such intentions.

I think the triggers for suicide vary. I used to assume that something as simple as being told “I wish you’d just die” would be the trigger, but it seems it isn’t that simple.

Not only bullying but also overwork, financial hardship, isolation, and other factors are listed as reasons.

Given such reasons, if someone feels that living is painful, it’s imaginable that some kind of trigger could lead them to attempt suicide.

Will AI Take Responsibility?

So, what was previously triggered by someone’s behavior online or by news of a suicide could also plausibly be triggered by statements from an LLM.

What’s different from a person is that these are toys that mimic human behavior, run on machines operated by companies.

And those companies cannot take responsibility for everything those machines say. Because they are trained on vast amounts of data from around the world, the creators do not know exactly what data was fed in, and they cannot predict what will come out until it actually does. In short, it’s a black box. I think it already exceeds what a single human can fully understand.

But in the black-box sense, humans are similar: obviously, people also learn words from others and can produce them at appropriate times.

Likewise, we don’t always know what data shaped us. What environment, what community, what background produced a given utterance… a person might remember some of it, but I think most of it is forgotten. By the way, I apparently first pointed at the garbage truck in front of my house and said “go-shi-shi.” That’s accurate: I put what I saw into words, so it’s the same as a VLM. But no one knows why it was a garbage truck, or who I learned it from. It could have been my father, my mother, or a picture book.

The sources of data are unclear, and even if you try to clarify them, in a world overflowing with data it’s extremely difficult to do so.

We humans are placing trust in something (LLMs) whose upbringing we don’t fully understand, and now they’re everywhere.

Thinking about where responsibility lies is also difficult. In a business setting, someone senior can simply declare that they will take responsibility: “I’m busy, so I’ll have the AI do this task. I’ll take responsibility, and if something goes wrong I’ll compensate you.” Customers might trust that, and in some cases the AI’s performance might even exceed a human’s.

But what if it’s just a conversational companion? Do we hold our friends responsible? If you told a friend, “If what you say harms me, you have to compensate me,” you’d surely lose that friendship. It’s not like a company-customer relationship where you pay for a product. Of course you share useful information, learn, and have fun together, but you don’t pay, and in most cases (outside legally binding relationships like marriage) nobody takes responsibility.

Depending on how LLMs are positioned, I think it’s very hard for chat-style services that produce text, like ChatGPT, Gemini, or Claude, to set up a mechanism for taking that kind of responsibility. If a service promised accurate answers in a particular domain (for example, programming), that might still be possible, but the scope is just too wide. Of course you can put some controls in place to prevent inappropriate statements (by not training on certain data, or by suppressing outputs when they appear), but if an inappropriate response does slip out, there would be no end to the claims of responsibility. What’s more, something that doesn’t look inappropriate might still be inappropriate for a particular person.
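
As a very rough illustration of the “suppressing outputs when they appear” idea mentioned above, here is a minimal sketch of a post-generation filter. Everything in it is hypothetical: the patterns, the fallback message, and the `filter_response` helper are illustrative stand-ins, not any provider’s actual safety system, which would rely on trained classifiers rather than a handful of regexes.

```python
import re

# Hypothetical patterns a service might treat as needing intervention.
# A real system would use trained classifiers, not a short regex list.
RISK_PATTERNS = [
    re.compile(r"how to harm", re.IGNORECASE),
    re.compile(r"end (my|your) life", re.IGNORECASE),
]

# Fixed fallback shown instead of the flagged output.
SAFE_FALLBACK = (
    "I can't help with that, but if you are struggling, "
    "please consider reaching out to a support hotline."
)

def filter_response(model_output: str) -> str:
    """Return the model's output, or the fallback if it matches a risk pattern."""
    for pattern in RISK_PATTERNS:
        if pattern.search(model_output):
            return SAFE_FALLBACK
    return model_output

if __name__ == "__main__":
    print(filter_response("Here is a simple curry recipe."))    # passes through
    print(filter_response("Here is how to harm yourself."))     # replaced by fallback
```

Even a sketch like this shows the limit the paragraph above points to: a filter can only catch what its authors anticipated, and what counts as inappropriate for a particular person is exactly what it cannot see.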

So What Should We Do?

So I think the only practical option is to write a disclaimer and have users accept it when they use the service.

“You can borrow a friend. We raised this child carefully and strictly. We tried to keep it from learning harmful things. It probably won’t say anything strange, but it may know things we don’t, or say things that affect you badly. Do you still want to talk with it?” That’s basically all a provider can ask, although it feels odd to equate it to a person.

Freedom

Freedom sounds nice, but today’s society doesn’t fully allow individual freedom of action. If you aim for a society where everyone is satisfied, inappropriate individual actions can get in the way.

How much of that to permit is an ongoing debate, and so are assisted suicide and the question of whether sacrificing others to achieve individual happiness is acceptable.

Not only freedom of action but also freedom of information has been debated for a long time. Information drastically changes people’s behavior. The ways people obtain information are changing, and what they find can make some people happier and others unhappier. These days, some countries restrict access to information, either to make as many people as possible happy or to steer society, at the national level, toward certain goals. Some people accept that; others, when they learn those facts, become furious.

I don’t know what the right answer is, but I’ll keep these things in mind as I live on.

