You just got an unread message notification.
Maybe it's on Slack (or Teams), a text message, or an email. Maybe you were tagged in Notion or an Office doc.
You open it to find several paragraphs of text - or perhaps a listicle - with all the hallmarks of AI-generated writing: headings, heavy formatting, "it's not X, it's Y" constructions, and em-dashes sprinkled gratuitously throughout.
The person who sent it probably spent about ten seconds on it. They asked a chatbot and forwarded its response to you verbatim without validation or critical review.
But now you're expected to read, vet, and act on this wall of text. When someone forwards text they themselves have not considered, they are asking you to do work they chose not to do. The asymmetric effort makes it rude.
Examples
The team is mid-discussion about whether to expand into the German market.
I asked Claude about this! Here's what it said:
Expanding into the German market presents both significant opportunities and notable challenges. Germany is the largest economy in the EU and offers a robust consumer base. However, businesses should consider the following key factors:
1. Regulatory Environment
Germany has strict regulations including GDPR compliance, local labour laws...
[6 more sections follow. None mention your company, your product, your budget, or reference the discussion already underway.]
The Eager Beaver
A conversation participant wants to contribute to the topic at hand, so they ask a chatbot and share whatever comes back. The intention is good - they genuinely want to help - but the wall of generic AI text they contributed blocks the discussion already underway. Now other participants have to scroll past it to continue, or stop to read and validate it.
It feels helpful to send. It creates work to receive.
Does anyone know why our email open rates have been dropping? We changed the subject line format last month.
ChatGPT says:
Email open rate declines can be attributed to several factors. These include changes in subject line strategy, sender reputation issues, list hygiene problems, and deliverability concerns. Here are the most common causes:
1. Subject Line Fatigue
If subject lines have become too similar or predictable, subscribers may stop engaging…
[Provides 5 more sections of generic email open diagnostics. Does not mention your subject line change, your audience, or your platform.]
The OrAIcle
Someone asks a specific question. Another person puts it into a chatbot and pastes the response as the answer.
"ChatGPT says" is the enshittified LLM-era equivalent of LMGTFY (Let Me Google That For You). Shared as a link or a GIF, LMGTFY was easy to ignore, and clear about what it was (sarcastic commentary). Sloppypasta is neither. Recipients are left to figure out whether it's AI-generated, whether it's correct, and which part actually answers the question (if it's relevant at all). If you ask a person a question, you're looking for their perspective and expertise. In this sense, both LMGTFY and sloppypasta are etiquette failures where the sender denied the recipient the dignity of a basic human reply.
Hey team - I did some research on our competitors this week. Here's a summary:
Competitive Landscape Overview
The market is highly competitive, with several established players and emerging challengers. Key competitors offer distinct value propositions across pricing tiers…
[It's a 5-page essay with handwavy assertions and no concrete details. No dates. No sources. No live pricing.]
The Ghostwriter
The sender shares AI output as their own work, with no indication a chatbot wrote it. Recipients have no reason to question it, and may act on information that is out of date, incomplete, or simply wrong.
Using AI as a ghostwriter borrows the sender's credibility. If the content turns out to be wrong, that credibility is what gets spent.
Why it matters
| | As a Recipient | As a Sender | Feedback loop |
|---|---|---|---|
| Effort | Previously, effort to read was balanced by the effort to write. Now LLMs make writing "free" and increase the effort to read due to additional verification burden. | Writing requires effort, which contributes to comprehension. LLMs increase cognitive debt by reducing struggle. | Sender's skipped effort becomes recipient's added effort, increasing frustration as incidence increases. |
| Trust | LLM propensity for hallucination and capability to bullshit convincingly mean that "trust but verify" is broken. All correspondence must be untrusted by default. | What you share directly influences your reputation. Sharing raw LLM output - especially unvetted - burns your credibility. | Eroding trust from LLM sloppypasta is the modern 'Boy Who Cried Wolf.' |
Simple guidelines
Read the output before you share it. If you haven't read it, you don't know whether it's correct, relevant, or current.
Delegating work to AI creates cognitive debt. Reviewing and reworking the output repays some of that debt and protects your own understanding.
Check the facts before you forward them. Anything you forward carries your implicit endorsement - your reputation depends on managing the quality of what you share.
LLMs are trained to "be helpful", and will produce outdated facts, wrong figures, and plausible nonsense rather than decline to answer. An LLM is also inherently out of date: its knowledge cutoff means it reflects, at best, the state of the world when its training data was collected, months or more ago.
Cut the response down to what matters. Distilling the generated response to the useful essence is your job.
LLMs are incentivized to use many words where few would do: providers billing per token benefit from chatty models, and longer, heavily formatted responses are often preferred in human ratings, nudging training toward verbosity.
Share how AI helped.
If you've read, verified, and edited it, send it as yours - preferably with a note that you worked with AI assistance. If you're sharing raw output, say so explicitly. In both cases, it may be useful to share your prompt and how you worked with the AI to get the final output.
Disclosure restores the trust signals that sloppypasta destroys and tells the recipient what you checked and what they may be on the hook for.
Never share unsolicited AI output into a conversation.
Remember that AI-generated text creates effort asymmetry, and be respectful of those you share with. Sloppypasta delegates the full burden of reading, verifying, and distilling to a recipient who didn't ask for it and may not realize the effort being demanded of them.
Share AI output as a link or attached document rather than dropping the full text inline.
In messaging environments, a large paste takes over the viewport and crowds out the existing conversation. A link lets the recipient choose when - and whether - to engage, rather than having that choice imposed on them.
AI capabilities keep increasing, and using it to draft, brainstorm, or accelerate your work will only become more useful. But using AI should not make your productivity someone else's burden. New tools require new manners.
Use AI to accelerate your work or improve what you send.
Don't use it to replace thinking about what you're sending.