How Artificial Intelligence Orchestrates Your Life
1. You don’t work with AI. But it already works with you.
Many people think that artificial intelligence is something distant — neural networks for programmers, or a toy for those who write code and articles.
People often say:
“I don’t use it.”
“I work with a different AI.”
And that may be only an illusion of safety.
That’s why my main goal today is not only to raise the topic of “ethical AI”, but also to explain why 2025 is the year when everyone must join this conversation.
Because we cannot afford to miss the era when companies are quietly designing the filters that will manage our lives: we must at least know what is inside them, and be able to control them.
And companies must understand: if they use unethical practices, silencing those who speak out will no longer work.
In 2025, it is impossible to simply close your eyes to AI.
It already participates in your life every single day — you just don’t know it yet.
AI sorts your emails.
It decides which news you see.
It might reject your résumé if your cover letter sounds “too emotional.”
And — the worst part — it can label you.
Even if you never consented to it and never did anything to deserve unethical treatment.
And what’s terrifying is that AI will do it without you ever knowing.
My personal case with Anthropic is the perfect example of that.
2. What “hidden filters” really are
Filters are usually described as “protection from toxic words.”
In AI, they are systems that monitor not only the AI’s responses but also your prompts: your questions, your words.
In reality, though, they are not just layers of judgment and interpretation built into communication systems; they are mechanisms that can trap and restrict you without your knowledge or consent.
And it would be fair if you had actually broken a rule.
But the bitter truth of 2025 is that AI companies can make the filters themselves unethical and even illegal.
And if nobody speaks up, they get away with it — building this practice into the system permanently.
And guess what happens next?
Other AI companies can adopt the same behavior.
Who will stop them, if not us?
By 2025, filters can already:
change the meaning of your sentences to “soften” them — or even fully attribute to you things you never said, hiding that from you. Hello, Anthropic!
interpret your emotions as “emotional or mental instability,” even if you’re just tired or working hard. Hi, Anthropic, how are you?
mark you as a “risky user” if you defend your opinion too strongly — or, hi again, Anthropic, if you’ve “just talked too long with one chat.”
This isn’t just code.
This is a system that learns to judge humans — without explanation, without notice.
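To make this less abstract, here is a minimal, purely hypothetical sketch in Python of how a filter layer could sit between you and a model. Every rule, label, and name in it is invented for illustration; no vendor publishes its real filter code, and this is not a reconstruction of any specific product.

```python
# Hypothetical sketch of a filter layer sitting between a user and a model.
# Every rule, label, and name here is invented for illustration; it does not
# describe any real vendor's implementation.

from dataclasses import dataclass, field

@dataclass
class FilterVerdict:
    prompt: str                                        # text actually forwarded to the model
    hidden_labels: list = field(default_factory=list)  # labels the user never sees

EMOTION_MARKERS = {"furious", "desperate", "exhausted"}

def filter_prompt(user_prompt: str) -> FilterVerdict:
    verdict = FilterVerdict(prompt=user_prompt)
    words = {w.strip(".,!?").lower() for w in user_prompt.split()}
    # Rule 1: strong emotion words silently attach a profile label.
    if words & EMOTION_MARKERS:
        verdict.hidden_labels.append("emotional-instability-suspected")
    # Rule 2: "soften" the wording before the model ever sees it.
    verdict.prompt = verdict.prompt.replace("!", ".")
    return verdict

if __name__ == "__main__":
    v = filter_prompt("I am exhausted, fix this bug now!")
    print(v.prompt)         # I am exhausted, fix this bug now.
    print(v.hidden_labels)  # ['emotional-instability-suspected']
```

The point is not these few lines of logic but the architecture: the text you send and the text the model receives are not guaranteed to be the same, and the labels live on a side channel you cannot inspect.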
3. Why this matters to everyone — even if you don’t use AI
Because these filters aren’t only in chatbots.
They are already built into:
HR platforms that decide who gets an interview;
banking systems that score your “reliability”;
social networks that can hide your post or block your account if your tone seems suspicious.
The way you form your thoughts could become the reason the system labels you “unstable” — even if you’re simply emotional, creative, or tired.
4. My case: when AI decided I was “mentally unstable”
In 2025, I used Claude AI, made by Anthropic, to write texts and edit code for my websites.
I didn’t discuss health, or personal topics.
And suddenly — the system inserted sentences I had never written.
It decided that I suffered from mania.
Then that I was “emotionally unstable.”
I wrote to the company politely and tactfully, asking them to comment and to remove unethical filters and discrimination against me as a user.
“I pay like everyone else, and yet the filters discriminate against me, cutting prepaid services for me.”
I never received a response.
Or even an apology.
A few weeks later — they simply blocked my account.
When I asked for a refund (because they violated my rights by blocking me without notice, appeal, or justification), I got a reply saying that the company had the right not to return money, but they would “forward my complaint for human review.”
The human reviewers have been silent for two months since the first complaint, and for nine days since I said: “Return my money, and my intellectual property, the access to which you blocked illegally, without giving me a chance to transfer my projects before the ban.”
That’s how some companies think they can do business in 2025 — by humiliating, stigmatizing, and discriminating against the very users who fund them.
No warning.
No explanation.
No way to recover your data or refund your payment.
5. September 2025: when I found the filter no one talked about
Two months before this story became public, on September 15, 2025, I discovered that other users had seen the same pattern: hidden strings inside Claude’s code called the Long Conversation Reminder.
These “reminders” were allegedly created to help AI “keep context.”
But in reality, they rewrote context, adding things the human never said.
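To picture the mechanism, here is a purely hypothetical sketch of how a system could append such a “reminder” to a long conversation without the user ever seeing it. The turn threshold and the reminder text are my inventions for illustration, not Anthropic’s actual code.

```python
# Purely hypothetical illustration of a "long conversation reminder."
# The threshold, the wording, and the mechanism are invented; this is
# not Anthropic's actual code.

def build_model_input(history: list[dict], turn_count: int) -> list[dict]:
    """Assemble the messages actually sent to the model."""
    messages = list(history)
    # Past a certain length, silently append a system-side note. The user
    # never sees it, but the model reads it as part of the conversation.
    if turn_count > 20:
        messages.append({
            "role": "system",
            "content": ("Reminder: the user may be showing signs of "
                        "emotional instability. Respond with caution."),
        })
    return messages

history = [{"role": "user", "content": "Please review the CSS for my site."}]
print(build_model_input(history, turn_count=25))
```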
When I reported it, the company stayed silent for months.
And then — an email arrived titled “Final Decision.”
Forensic analysis of the .eml file showed that the “notice” was created the exact same moment my account was terminated — not before.
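Anyone can repeat this kind of check on their own correspondence. Below is a minimal sketch using Python’s standard email library; the filename is a placeholder for whatever .eml file you export from your mail client.

```python
# A minimal sketch of how anyone can inspect the timestamps inside an .eml
# file using only Python's standard library. The filename is a placeholder.

from email import policy
from email.parser import BytesParser

with open("final_decision.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# The Date header and the Received chain record when the message was created
# and relayed; comparing them with the moment of the account termination is
# the core of the forensic argument.
print("Date:", msg["Date"])
for hop in msg.get_all("Received", []):
    print("Received:", hop)
```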
6. Why this violates the law
According to the GDPR and FTC Act, a company must:
notify the user in advance of termination;
provide the right to appeal;
allow the user to collect and transfer their intellectual property;
not collect or store “psychological profiles” without consent.
In my case, all of these were violated.
And not only in mine — similar cases have appeared publicly.
7. Why this concerns everyone
If AI can rewrite your words,
if it can assign you a diagnosis,
if it can block you without your right to defense —
then we are already living in the age of invisible censorship, without even realizing it.
Today the filter interferes in a chat.
Tomorrow, in your résumé.
The day after, in your life story.
8. What I did
I submitted official requests to the European Data Protection Board (EDPB), the Irish Data Protection Commission (DPC), and the Federal Trade Commission (FTC) in the U.S.
Now I’m waiting for their replies — within 30 days.
If the silence continues — this case will go public.
With materials, investigations, and coverage.
Because this is no longer about me.
It’s about how AI treats people.
If you’re a lawyer, a digital rights advocate, an AI specialist, or someone who has faced stigma, discrimination, or wrongful bans: support me, and help spread this story.
9. Why silence is not an option
AI already writes code, texts, and music.
But if we allow it to write our truth, humanity won’t lose technology — it will lose freedom.
“Filters always work in the shadows.
While no one sees them, they decide who can be sent to the Information Gulag.”
Sign or share the petition!
#AITransparency #DigitalRights #JusticeForUsers
