lordnacho a day ago

The way to understand it is when you catch yourself almost falling asleep at night while reading something. You lose the ability to understand anything, even though you are still reading and the words are still English.

LLMs are great at generating that sort of thing. When you lose concentration, or never wanted to pay attention in the first place, a document like that could have been generated by an LLM to fool you.

That's a lot of documents. All the powerpoints that young investment bankers and strategy consultants make. Every restaurant menu. Brochures. The middle 100 pages of a pop science book from the airport.

When it comes to writing something actually important, an LLM can only help you a little bit. Well, that's a lot in historical context. But it can do things like bullet-point ideas so you don't forget things, or move a few things around while preserving grammatical structure.

All it can do is bring you the ingredients. That's why we get these odd AI-generated articles nowadays, it's a bunch of chopped onions and tomatoes, very nicely done up but with no intention other than to trick you into thinking it's a meal.

  • HarHarVeryFunny a day ago

    Maybe because LLMs are literally designed to generate boring content. They are trained to output the best possible prediction - i.e. the most predictable version - of how the input will continue (i.e. the least surprising/interesting continuation).

    Humans use surprise (= prediction failure) as a learning signal and to focus our attention, so it's not surprising that we doze off and lose interest when something is highly predictable.

    Good human authors know how to introduce plot twists and suspense to keep the reader engaged and guessing. Imagine an entire novel written instead by an LLM, where the only plot twists are the predictable ones it copied from its training material. The same goes for any LLM-generated prose, whether advertising copy or training manual - it's going to be a snooze fest because it's so predictable - very low signal-to-noise ratio.
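
    As a toy illustration of "most predictable continuation" (a minimal sketch over made-up probabilities, not any real model): greedy decoding always takes the argmax token, i.e. the least surprising one, while temperature sampling is the usual knob for letting less predictable tokens through.

        import random

        # Invented next-token distribution after some story prefix.
        next_token_probs = {
            "nothing": 0.40,              # the safe, boring continuation
            "the body": 0.30,
            "a cat": 0.15,
            "her own reflection": 0.10,
            "a singing lobster": 0.05,    # the twist a novelist might reach for
        }

        def greedy(probs):
            # Greedy decoding: always pick the most probable (least surprising) token.
            return max(probs, key=probs.get)

        def sample_with_temperature(probs, t):
            # Higher temperature flattens the distribution, giving rare tokens a chance.
            weights = {tok: p ** (1.0 / t) for tok, p in probs.items()}
            total = sum(weights.values())
            r = random.uniform(0, total)
            for tok, w in weights.items():
                r -= w
                if r <= 0:
                    return tok
            return tok  # fallback for float rounding

        print(greedy(next_token_probs))                        # always "nothing"
        print(sample_with_temperature(next_token_probs, 1.5))  # occasionally the lobster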

    • EnPissant 14 hours ago

      But they predict based on context. If you prompt it with "You are a creative writing bot that excels in generating unpredictable, yet coherent text", does it predict what is unpredictable?

      • HarHarVeryFunny 9 hours ago

        Yes, that's all it can ever do: predict a continuation. It's not a person though, so it wouldn't be predicting what you might find unpredictable, but rather predicting training samples that it associated with unpredictability (e.g. with some following feedback of "well, that was unpredictable!"). IOW it'll be predicting the most predictable unpredictability that it can.

    • nullc 15 hours ago

      Base models are less boring, a lot less. So I think it's less a product of most-likely prediction and more a product of reinforcement, which biases toward text that uses a lot of words but says little, since saying nothing is at least not saying something identifiably hallucinatory.

      • HarHarVeryFunny 8 hours ago

        RL isn't in general going to bias the model to use a lot of words, unless your RL training had the goal of favoring long over short responses.

        There are now multiple levels of RL being used to post-train these models, from RLHF (use RL to bias the model to generate outputs matching human feedback preferences) to RL used to improve reasoning by generating reasoning steps that lead to verified correct conclusions (in areas like math and programming where correctness can be verified).

        RLHF (not RL in general) may lead to longer, more verbose outputs to the extent that human testers indicated longer responses as their preference. Maybe testers are easily bullshitted and like something that is longer and sounds like a more comprehensive, authoritative answer?

        There is also the fact that an LLM, unless prompted otherwise, is trying to predict Mr. Average (of the entire training set), who is more likely to waffle on than an expert who will cut to the chase and just give the facts, which they have a firm grip on. You can of course prompt the model to behave like an expert, or any given role, or to be more concise, which may or may not result in better output. It's a bit like asking the model to summarize when it's not really summarizing but instead predicting what a summary would look like (form vs function).
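
        To make the claimed length bias concrete, here is a toy simulation (every number is invented for the sketch): if raters prefer the longer of two candidate answers even slightly more often than chance, picking winners by such comparisons drifts the average selected answer well above the average candidate.

            import random

            random.seed(0)

            def rater_prefers_longer(a_len, b_len, bias=0.6):
                # Hypothetical rater: with probability `bias` the longer answer wins.
                if a_len == b_len:
                    return random.choice(["a", "b"])
                longer = "a" if a_len > b_len else "b"
                if random.random() < bias:
                    return longer
                return "b" if longer == "a" else "a"

            def avg_selected_length(n_candidates=4, rounds=10000):
                # Average length of the answer that survives pairwise comparisons.
                total = 0
                for _ in range(rounds):
                    lengths = [random.randint(50, 500) for _ in range(n_candidates)]
                    winner = lengths[0]
                    for challenger in lengths[1:]:
                        if rater_prefers_longer(winner, challenger) == "b":
                            winner = challenger
                    total += winner
                return total / rounds

            print("avg candidate length:", (50 + 500) / 2)              # 275.0
            print("avg selected length:", round(avg_selected_length()))  # noticeably higher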

  • anonzzzies 20 hours ago

    I find LLMs great for the type of client/business partner who keeps asking for docs that I know they will never read. This happens a lot, especially when the 'tech team' is all management MBAs and the actual tech is outsourced. We have a client with a tech team like that: we are in meetings with 15+ people from their side who are from the tech team, and they know nothing about their tech. They spend the meeting flagging areas where they 'need documents' and we provide them next time. We make a lot of money from them so we do it, and AI makes this easier than before: basically chuck the meeting into an LLM which has as 'memory' a bunch of our actual technical material (which is publicly available), and then generate those docs with some prompts like 'exhaustive' and 'thorough'.

  • bobson381 a day ago

    You would like this article - https://www.theintrinsicperspective.com/p/curious-george-and...

    The author orders some Curious George stickers and gets a surprising array of kind of scary "closely related" stickers with em.... a fuzzy interpreter. The plus c when you integrate. Those missing harmonics and back-channel data that get inaudibly cut out of MP3s. The goo between the prickles. We don't know how to say what it is, but we sure can tell when it's missing!

    • xdavidliu a day ago

      > The plus c when you integrate. Those missing harmonics and back-channel data that get inaudibly cut out of MP3s. The goo between the prickles

      come again?

      • bobson381 a day ago

        In between rules in formal systems there's lots of space. All models are wrong, some are useful. I just really like thinking about how much is there that we don't even know about - and the more structured a system for understanding something is, the more fragile it becomes.

        • xdavidliu a day ago

          ok but what does "goo between the prickles" refer to?

          • bithive123 a day ago

            “See, there are basically two kinds of Philosophy - one's called prickly, the other one is called goo. Prickly people are precise, rigorous, logical - they like everything chopped up and clear. Goo people like it vague, big picture, random, imprecise, incomplete and irrational. Prickly people believe in particles, goo people believe in waves. They always argue with each other but what they don't realize is neither one of them can take their position without their opposition being there. You wouldn't know you are advocating prickles unless someone else was advocating goo. You wouldn't even know what prickles was and what goo was. Life is not prickles or goo, its gooey-prickles or prickly-goo.” ― Alan Watts

  • couscouspie a day ago

    > The way to understand it is when you catch yourself almost falling asleep at night while reading something. You lose the ability to understand anything, even though you are still reading and the words are still English.

    That's btw a perfect analogy for what ADHD is like when consuming anything, anytime.

stillpointlab a day ago

I'm usually the one defending AIs in the comments, but this article hits home for me. I find myself zoning out when I read long tracts written by AI. I absolutely hate filler and many LLMs just fill up space with text.

In my own usage, that has meant that even when I use LLMs to help with prose, I write the text and use the LLM to review and provide feedback. In some cases I will copy a sentence if the LLM version is better but generally I just ask for opinions. I explicitly request the AI _not_ to write. When it re-writes entire paragraphs of my prose I actually experience a deep cringe feeling.
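
For anyone who wants to wire that review-only workflow into a script, here is a minimal sketch. It assumes the official openai Python package and an OPENAI_API_KEY in the environment; any chat API would work, since the system prompt is the only interesting part.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM = (
        "You are an editor, not a writer. Critique the draft below: point out "
        "unclear sentences, filler, and weak arguments, quoting the offending "
        "text. Do NOT rewrite it or produce replacement prose."
    )

    draft = open("draft.txt").read()  # hypothetical file holding your own writing

    resp = client.chat.completions.create(
        model="gpt-4o",  # any chat model will do
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": draft},
        ],
    )
    print(resp.choices[0].message.content)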

  • amanaplanacanal a day ago

    They seem to be good at generating a lot of boilerplate, which works for some people because our processes require a lot of boilerplate. We'd be better served by fixing our processes to not require all this useless text, but I don't see this happening.

    • jaredklewis a day ago

      Over time I’ve come to appreciate boilerplate more.

      Early in my career I really appreciated very DRY code with minimal repetition. However over time I’ve noticed that such code tends to introduce more abstractions as opposed to more verbose code which can often rely on fewer abstractions. I think this is good because I think we also have a sort of “abstraction budget” we have to stay within or our brains, metaphorically, stop reading from memory and need to start reading from disk (consulting docs, jumping to function definitions, etc…)

      I feel the ideal code base would rely on a small number of powerful abstractions.

      In practice I think this usually means relying mostly on the abstractions built into the language, standard library, and framework used, and then maybe sprinkling in a couple of app/domain-specific abstractions or powerful libraries that bring their own abstractions.

      So in my experience reducing boilerplate can often make the code more difficult to understand.
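
      A small sketch of the tradeoff (every name here is hypothetical): the verbose version repeats itself but each handler is self-evident, while the DRY version removes the repetition at the cost of one more abstraction - a handler factory - that the reader has to internalize first.

          # Stubs so the sketch runs on its own; all names are made up.
          def authenticate(req): pass
          def validate(body, schema): pass

          class DB:
              def insert(self, table, body): return ("insert", table, body)
              def update(self, table, body): return ("update", table, body)

          db = DB()
          CreateSchema, UpdateSchema = "create", "update"

          class Req:
              def __init__(self, body): self.body = body

          # Verbose version: the same three steps repeated in each handler.
          def handle_create(req):
              authenticate(req)
              validate(req.body, CreateSchema)
              return db.insert("items", req.body)

          def handle_update(req):
              authenticate(req)
              validate(req.body, UpdateSchema)
              return db.update("items", req.body)

          # DRY version: no repetition, but one extra concept to learn.
          def make_handler(schema, db_op):
              def handler(req):
                  authenticate(req)
                  validate(req.body, schema)
                  return db_op(req.body)
              return handler

          handle_create_dry = make_handler(CreateSchema, lambda b: db.insert("items", b))
          handle_update_dry = make_handler(UpdateSchema, lambda b: db.update("items", b))

          # Both behave identically:
          print(handle_create(Req({"n": 1})) == handle_create_dry(Req({"n": 1})))  # True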

      • wrs a day ago

        It’s not really an abstraction budget, it’s an “unfamiliar abstraction” budget. You only pay cognitive load for an abstraction when you haven’t yet internalized the abstraction. Otherwise progress would have become impossible long ago.

        There’s a huge pile of abstractions in that codebase that you have internalized to the point of invisibility — machine instructions, virtual memory, ASCII, sequential execution, named variables, garbage collection, network streams…literally thousands of these abstractions had to be invented and become second nature to you so these other abstractions that currently aren’t as familiar could be based on them.

        A good abstraction gives a return on investment of effort internalizing it. There’s no limit on how many good abstractions you should have.

        • jaredklewis a day ago

          I think we agree, just I would stipulate that I have to see an abstraction a lot to internalize it (internalize is a great word for this btw).

          Like way more than the rule of 3. Even a dozen uses (if it’s only going to be local to just that particular project) is not enough for me.

          Edit: and maybe this just says more about the kinds of languages I use at work, but I feel the conventional, widely understood, idiomatic way to do lots of things is often heavy on boilerplate. If these languages were rewritten from the ground up, I imagine with hindsight the core languages and standard libraries would look pretty different and be more ergonomic.

ozgung a day ago

Writing with an LLM is like doing a mapping from human prompt -> document. The longer the prompt, the closer the output document is to the human intent. If the prompt is short (= expressed human intent is low), the output must be filled with fluff, generated from the general knowledge/common sense in the LLM's knowledge base. If the prompt is very detailed, it's not much different from writing it yourself. Somewhere between these two extreme cases, writing with LLMs has the optimal productivity gains.

-> (GPT 4.5 "rewrite like a good editor")

Writing with an LLM can be viewed as translating human intent—expressed via a prompt—into a written document. The longer and more detailed the prompt, the closer the resulting document aligns with the original intent. Shorter prompts, indicating lower clarity or precision, compel the LLM to rely on general knowledge or common sense to fill in content, often resulting in fluff. On the other hand, excessively detailed prompts become nearly equivalent to writing the document yourself. Optimal productivity with LLMs occurs somewhere between these two extremes.

  • Swizec a day ago

    > Somewhere between these two extreme cases, writing with LLMs have the optimal productivity gains.

    A recent win for me has been to use LLMs for finding references. I’ll write a paragraph about one of those “everyone knows this” concepts or a “I’ve seen this be true dozens of times IRL”. But it could use a citation.

    So you go “Hey LLM, find me papers that support or refute this claim <paragraph>”. And it does. With links. It’s wonderful. Unlike Google there’s no recency bias, I’ve found original source blogs from 2005 (since requoted lots in SEO spam), papers about software engineering truths from the 1960’s (requoted ad nauseam by SEO spam), and sometimes even “Here are 2 papers that say you’re right and 3 papers that say you’re not”. Then I get to dig in and figure out who’s right.

    Easily 10x faster than doing the research myself.

    • Polizeiposaune a day ago

      What's the ratio of real citations summarized correctly vs syntactically plausible citations to nonexistent papers? Lawyers and judges have been getting in real legal trouble from the latter.

      • jll29 a day ago

        I saw a paper that cites an author like this: "some metric (Smith 1971)".

        The author's last name was correct, but the year cited should have read "2004" instead. Turns out the Smith listed in the bibliography wasn't the Smith who invented the 2004 metric from computer science for which he was cited, but another Smith with the same last name, from the field of medicine, who wrote a very unrelated paper in 1971 that the entry actually referred to. I made the author aware (not mentioning any suspicions of potential LLM involvement...) by email and was told I wasn't the first one to point that out.

        Papers like that are now creeping into journals, conference proceedings and online archives, mostly unchecked/unnoticed, which waters down the quality a lot.

        PS: cited author's name changed.

      • Swizec a day ago

        As of the last few months all citations were correct. I checked and obviously wouldn’t use any that I didn’t verify.

        I’m using ChatGPT, so whatever they’re doing is working.

        • Kim_Bruning a day ago

          4o or o3? Especially o3 seems to go all in on accurate citations.

      • sixhobbits a day ago

        This got a lot of press in the beginning and the raw models definitely still do it, but all the major players now use 'web search' or 'research mode', so they basically google 200+ pages and then read them all and keep the real links if they use them.

    • PaulHoule a day ago

      What are you using? Microsoft Copilot frequently gives correct answers with totally wrong citations.

  • mlsu a day ago

    No, the prompt, no matter the amount of detail, is always strictly better than the model output. It has to be, because the whole point of reading and writing is _communication_. Communication between TWO people.

    This AI interlocutor is like a permanent cataract. It always makes it harder to see, never easier.

    • jayd16 a day ago

      I don't think this is a great analogy. These things can pull in preexisting explanations and such. It doesn't just use the prompt so it's not a strict ceiling.

      • goopypoop a day ago

        My butler is an excellent researcher but I think there's something wrong with him

        • polynomial a day ago

          He might be a murderer, many such cases.

    • HPsquared a day ago

      It's a translator. Very useful when the message needs to be in a specific format or if you're talking to a computer, or if you need help with "grammar" or "protocol" (in all its forms).

    • PaulHoule a day ago

      I get comments on photos I post to social media in languages I have only partial comprehension of, such as Japanese and Portuguese. I use Copilot to translate their messages, ask specific questions about particular words, get explanations about idioms, and supply context. I ask it to translate my replies, which I always translate back through another LLM to try to catch problems; sometimes I ask it to make an edit, sometimes I make an edit myself.

      Often I use LLMs to "have a conversation with a language", such as researching a possible cognate relationship between the phrase "woo", used to describe the supernatural in English, and the similarly pronounced character 巫 (wu) in Chinese, which is used in words like 女巫 (witch -- that first character means "woman") but is not the "wu" in 武侠 (wǔxiá -- martial arts).

      I am sure one of these days I am going to embarrass myself, but with only partial comprehension I could do that just as easily without the help of an LLM.

  • armchairhacker a day ago

    I’ve seen good LLM summaries.

    LLMs are best at using words and phrases that “flow well”. Sometimes they write nonsense that flows well (even the summaries have to be checked because sometimes the meaning is different), but sometimes they find better words and phrases that I couldn’t find myself.

    They’re especially good if you have almost stream-of-consciousness writing and can’t bother to rephrase it yourself, e.g. you’re writing a boring technical report. Even for something where quality is important, like an epic story: you can write a very rough draft, revise with an LLM, then revise manually.

  • abujazar a day ago

    Giving the LLM access to sources and enabling it to look up things that you would otherwise have to check yourself dramatically improves the usefulness and productivity, though.

    • vouaobrasil a day ago

      On the other hand, looking things up yourself allows you to become quite familiar with the way things work in a more detailed way than just getting the answer through an LLM.

    • GuinansEyebrows a day ago

      you give up understanding for... what, exactly? what do you gain from transitioning from active to passive participant in your communicative life?

  • yapyap a day ago

    Just write it yourself, man. If you have to closely reread everything spit out by an LLM for faults and all that, why even bother?

    and if you aren’t proofreading, shame on you

zebomon a day ago

I think this attitude is here to stay: people don't like reading something only to realize that it's been written by an LLM. That's only partially because of what the author describes here (low value to word count ratio). More fundamentally, if a human couldn't be bothered to write it, then there better be a very good reason that I as a human am being bothered to read it.

This attitude provides a clue as to 1) the ways we're using LLMs now that will soon seem absurd (an LLM should never make writing longer, only shorter) and 2) the ways LLMs will be used after the novelty wears off, like interpreting loosely specified requests into computable programs and distilling overly long writing to maximize relevance.

  • taude a day ago

    My new saying is that if you want to build and use an AI process to automate something, the org should revisit whether the process itself is still worthwhile...

josephjrobison a day ago

Like everybody, I waffle back and forth, depending on the day and the project.

I have so many bundled-up ideas for content in my head, but I can never find time to get them out. In addition, I hate wasting time on building tables, formatting text, etc. There's a reason a lot of busy CEOs don't hand-write blog posts but will go on podcasts, video, and conferences, and spend hours there: the communication speed is much faster when talking and being interviewed. All that to say, via a combination of transcription and AI-powered formatting, that's an area where LLMs can really help.

I do think LLMs + search is more helpful than a Google Search. Clicking through all the links for you and bundling it together is a much better experience and truthfully it finds and surfaces ideas and content that a normal search just doesn't, or it takes the user going to the 4th page to get there.

Third, entertainment. We often find ourselves deep in a doomscroll looking at images and video - on Instagram, TikTok, whatever. If AI can produce entertaining imagery and video for the pure goal of decompressing and relaxing after a long day out at the oil fields or as a janitor at the elderly care home, then there's value there. Sure, it may not be beautiful art like an Oscar-winning film, but is it worse than reality TV? Even better for those who enjoy the creativity of seeing images from their mind produced vividly via something like Midjourney.

So all that being said, that's where I'm seeing the value.

But I totally agree with the author - if I have a team member who puts work in front of me that's supposed to be well done, and it's obviously GPT-generated, that's an issue - unless we agreed on it.

A lot of the time, doing good work means doing something new and creative. If it's AI-generated, there still has to be a human touch. I could go on - we all have perspectives here - but I'll stop there!

parpfish a day ago

sometimes reading LLM text reminds me of being a TA and having to grade essays from students who didn't really understand the material.

the courses i had to grade in were often undergrad students' first exposure to reading scientific papers. when it came time to write an essay of their own, they always felt a need to mimic the writing style of those papers, and it resulted in lots of superficial changes to their writing that made it clear they had read the papers but did not understand them. they'd sprinkle in words like "putative" and "substrate" in places that felt correct but made no sense.

i wish they would've had the confidence to just write what they wanted to say instead of putting on airs and producing convoluted, sometimes nonsensical prose.

LLMs do the same thing.

dang a day ago

We need to change the linkbait title (https://news.ycombinator.com/newsguidelines.html). I've cobbled together a couple different phrases from the article. Would have been nice to add "when you think it’s written by a human" but there isn't room within the 80 char limit.

If anyone has a better title (i.e. more accurate and neutral, preferably using representative language from the article), we can change it again.

blargey a day ago

That's really the core of the issue - if your prompt/input contains 10 bytes of salient information, and the output is a 1kB document, all you've done is add 990 bytes of fluff. Commonly referred to as "AI slop".

If the goal is boilerplate code, or a fuzzy view of documentation from the corpus, or "pretty" images to gawp at in private, that's fine and dandy, because you're just extracting and viewing the model data on your own time.

If you're "proofreading" an already-1kB letter for style / formality that's fine, since your input intents are high-fidelity and you're essentially referencing a style guide for some minor edits.

But low-input generations are entirely inappropriate for crafting a message to other humans, because the result is 99% shoving the model in their face. Whether it's letters, pull requests, graphical art, or music - it's all communication, and low-input AI generations are just spam in that context, semantically equivalent to sending them the prompt instead of the output. And people can tell, because we're excellent pattern-matchers, and every bit of information/intent abdicated to the model will inevitably leave its mark on the output.
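
A crude way to see that input-to-output information ratio (a toy heuristic, not a real information-theoretic measure): compare compressed sizes of prompt and output, since boilerplate fluff compresses away while salient content does not. Both texts below are invented for the sketch.

    import zlib

    prompt = b"write a friendly reminder that the meeting moved to 3pm"

    output = (
        b"I hope this message finds you well! I wanted to take a moment to "
        b"reach out regarding our upcoming meeting. As you may know, schedules "
        b"can sometimes shift, and in this case, ours has. The meeting has been "
        b"moved to 3pm. We truly appreciate your flexibility and understanding, "
        b"and we look forward to seeing you there!"
    )

    def info_bytes(text: bytes) -> int:
        # Compressed size as a rough proxy for information content.
        return len(zlib.compress(text, 9))

    print("prompt:", info_bytes(prompt), "compressed bytes")
    print("output:", info_bytes(output), "compressed bytes")
    print("fluff factor: %.1fx" % (info_bytes(output) / info_bytes(prompt)))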

PeterStuer a day ago

OTOH, I've seen plenty of design documents I wish had been written by an LLM instead. My take is it's the person driving the LLM that makes the difference. If they couldn't be bothered to produce decent documents with an LLM, they probably couldn't care less without an LLM either.

  • cootsnuck a day ago

    Agreed. LLMs are tools just like any other. If someone builds you a shitty shed, we wouldn't blame the tools, and we also wouldn't be satisfied if they blamed their shoddy work on the tools.

    I think a lot of the discourse just shows we're still in the early days for adoption and maturity of this tech. (Not to blame the discourse, I think it's a reflection of the state of the tools and industry.)

    Eventually, likely at least a couple years from now, more people will have better technical literacy when it comes to LLMs. And it will become less and less acceptable to put out slop with them in most professional settings. I just think human disgust is too strong of an emotion to acclimate to the laziest of LLM outputs.

    I mean I think it's already happening before our eyes. I've been pleasantly surprised by how many more people I see talking in much more detail about the unique qualities of LLM outputs. And slowly, but surely the discourse is becoming less about moral panic and more just about questioning their use. Which has to happen for us all to collectively (and individually) figure out where and what is the place for LLMs in our personal and professional lives.

transreal a day ago

The person who sent the author the design document is at fault here. If you're hired to do a job, use whatever tools you want as long as your output meets minimum standards of quality. If the tools aren't there yet, it's up to you to bridge the gap.

  • stronglikedan a day ago

    > If you're hired to do a job, use whatever tools you want as long as your output meets minimum standards of quality. If the tools aren't there yet, it's up to you to bridge the gap.

    Unfortunately this doesn't hold up in the real world, where managers dictate which tools are used, and directors dictate which are even going to be purchased for use. Sometimes the gap can't be bridged, but the onus is certainly not on the employee to bridge it.

redundantly 20 hours ago

I use LLMs to review what I write for professionalism, conciseness, consistency of tone, consistency of important details, and factual accuracy. I find that to be the most helpful.

I'm really just using the LLM as a sounding board. Sometimes I'll replace a sentence using the feedback, less frequently I might replace a whole paragraph, but mostly I just use the feedback to manually tweak what I wrote.

fleebee a day ago

Well written. I've been thinking about this topic lately and finding it hard to put into words, but you've captured a lot of the feelings I've had about it.

Judging by discussions I've had around this topic, a lot of people don't seem to mind reading LLM edited or straight-up generated content as long as they feel like they gain value out of it. To me, it feels like a violation of a social contract. Communication isn't just words; the words are there to help you interpret the implicit meaning of the author. When there is no author, that meaning doesn't exist and I think that's part of why it's so repulsive to decipher LLM output.

eru a day ago

Sounds like the author is complaining that LLMs aren't good enough yet?

I agree with that, but they are getting better all the time.

  • 8n4vidtmkvmk a day ago

    Doesn't sound like that in the slightest. The article is about human intent. No matter how good an LLM is, it cannot convey the intent of a human without literally reading the human's mind.

    • kaffekaka 6 hours ago

      I believe this is wrong. I am certain that LLMs can - even now - produce texts that human readers will interpret as having true intent. It is not some magical quality that only humans may put into words.

      The fact that texts by LLMs cannot be the result of true intent is another question. When reading a text, we are only guessing at its intent.

      A friend said about AI-generated music: "AI can never be creative, so the music will never be creative." I think that is a mistake. Like intent, creativity is not something intrinsic in the text/music, but a part of the consumer's interpretation.

      • 8n4vidtmkvmk 6 hours ago

        Interpret as having intent and having intent are not the same. If someone else writes something for me (human or AI) they are not going to use the exact words that I would have used. If I judiciously correct everything they type out for me so that it correctly expresses what I want to say, then sure, it has my intent, but that's not how people are using LLMs. They are lazily scanning its output at best.

        Music and creativity are an entirely separate matter. I don't think LLMs can be truly creative because they're just mixing existing ideas from their training set. Which is also what humans do 99% of the time, but I like to believe that once in a while humans have a truly original idea. This one is a bit harder to prove though.

        Music is almost entirely unoriginal though. It's just sex, drugs, love, gangster BS. Basic primitive human stuff. AI should have no problem with that.

  • ashton314 a day ago

    That's not what I'm saying.

    My argument is that using a machine to replace your thinking, your voice, or your relationships is a very bad thing. Humans have intrinsic worth—machines do not.

    • eru 19 hours ago

      OK, humans have intrinsic worth, sure. But why then do you mourn when a computer takes over a job that a human used to do? The human still has her same intrinsic worth as before. Your worth ain't defined by your job.

      • ashton314 an hour ago

        Computers can do a lot of human jobs. But something I believe is fundamental to the human experience is the connection with other humans. Using an LLM or similar technology as a means to circumvent or shirk such connection is reprehensible. Using a computer to do another job is fine.

    • vouaobrasil a day ago

      > My argument is that using a machine to replace your thinking, your voice, or your relationships is a very bad thing. Humans have intrinsic worth—machines do not.

      I agree with that, and the only logical path if we are to preserve this principle is to eradicate AI, and not try and control it. There is no way to control it (think prisoner's dilemma, greedy individuals, etc.)

      • cwmoore a day ago

        No, what will happen is that time wasted believing in magical LLMs, instead of developing technical and interpersonal skills will prove unproductive longterm. Like most goldrush claims not panning out, it will be followed by broad amorality among the newly destitute.

        Read a book.

        • vouaobrasil a day ago

          Unproductive for the person perhaps, but not for the development of technology. But I do agree in general that using AI is not a great strategy for human beings. Read a book indeed.

  • xeonmc a day ago

    > they are getting better all the time.

    In the very same way as a deal with Darth Vader, I presume.

  • bgwalter a day ago

    I doubt that. Also from the article: "And no human is so worthless as to be replaceable with a machine."

    • eru 19 hours ago

      Eh, 'computer' used to be a job description, too. Are you mourning the replacement?

      In any case, we are not literally replacing humans. We just shuffle jobs around: when a machine does a job that a human used to do, the human isn't replaced; the human is still there and could do something else with their life.

QuantumGood a day ago

Even before LLMs, I began starting from the end of long articles (after placing them in reading mode), because SEO believed "time spent reading" was a valuable metric and articles were padded accordingly. Now I read just the first sentences to decide whether I should skip the whole thing.

yapyap a day ago

“ At work I was sent a long design document and asked for my thoughts on it. As I read, I had a really hard time following it. Eventually I guessed correctly (confirmed via a follow-up conversation I had with the “author” ) that an LLM had generated the majority of the document. Parts of it sounded like a decent design document, but there was just way too much fluff that served only to confuse me”

yuck, if you ask me to read something without disclosing it's AI and it's just bad, I will be mad. Especially if it's a long thing and I HAVE to read it (like for work).

piss off wasting people’s time with that shit

cormorant a day ago

> Note on the title: “Artificial Inanity” comes from Neal Stephenson’s novel Anathem.

Which was awfully prescient in 2008: https://archive.org/details/anathem0000step/page/794/mode/2u...

They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum.

...

vouaobrasil a day ago

> I am not saying that LLMs are worthless—they are marvels of engineering and can solve some particularly thorny problems that have confounded us for decades.

Disagree with that, because firstly, they have not really solved any problems that outweigh the negatives that they have unleashed and will unleash on society.

So they make programmers more effective: is that actually a good thing, though? Fact is, most software is designed to make consumerism and corporations more effective, and that's not really a good thing for the long-term health of the planet.

Your article also suggests a sort of independence between keeping intellectual tasks primarily human and allowing AI/LLMs to work in specific domains. However, those with the power don't care about principles. They just want to replace as much as they can, and they use the human instinct to get ahead quickly to do so. And no amount of principle will stop them. AI is just too powerful to be used in a way that is consistent with human beings keeping their intellectual environment healthy.

  • stronglikedan a day ago

    > Disagree with that...they have not really solved any problems that outweight the negatives

    And I disagree with that. They are marvels of engineering and they have solved thorny problems. Just because the problems they've solved in the very short time they've been solving problems don't yet outweigh the negatives doesn't mean they won't soon, and doesn't make either statement false.

    Great things take time, and great omelets are made from broken eggs. Nothing new under the sun, except AI.

    • vouaobrasil a day ago

      > they have solved thorny problems.

      Like what then? Let's hear some examples.

      • glial a day ago

        For the sake of argument, here's one:

        > https://www.nature.com/articles/s41586-023-06924-6

        • vouaobrasil a day ago

          You can't be serious. This is a highly specialized field in a topic that a few dozen people have interest in. Rather useless and basically a mentally stimulating game for some professors. I have a math PhD and know very well that math went well past its point of diminishing returns in solving real-world problems a long time ago.

          • glial a day ago

            You asked for a thorny problem, and that is one. Whether it's significant by some other metric you didn't specify is a separate question.

      • rpdillon a day ago

        Not going to do all your legwork for you, but there are tons of fields that are changing rapidly because of AI. In Material Science, there's a thorny problem of how to accelerate material development, or even how to perform non-destructive testing of materials.

        > AI, primarily through generative AI models, has dramatically changed our approach by accelerating the design process significantly. These models can predict material properties from extensive datasets, enabling rapid prototyping and evaluation that used to take years. We can now iterate designs quickly, focusing on the most promising materials early in the development phase, enhancing both efficiency and creativity in materials science. This is a huge leap forward because it reduces the time and cost associated with traditional materials development, allowing for more experimentation and innovation.

        > One notable application is using deep learning models to infer the internal properties of materials from surface data. This technology is groundbreaking, particularly for industries like aerospace and biomedical, where non-destructive testing is crucial. These models can predict internal flaws or stresses by analyzing external properties without physically altering the material. This capability is essential for maintaining the integrity of critical structures and devices, making materials safer and more reliable while saving time and resources. Other recent advances are in multimodal AI, where such models can design materials and understand and generate multiple input and output types, such as text, images, chemical formulas, microstructural designs, and much more.

        https://professional.mit.edu/news/articles/revolutionizing-m...

        There's lots of other examples.

        New ways to create COVID vaccines: https://www.nature.com/articles/s41586-025-09442-9

        More effective than humans at reading medical scans: https://www.weforum.org/stories/2025/03/ai-transforming-glob...

        AI is already intensely useful, and will only continue to improve.

        • luanallnllm a day ago

          All examples you linked are speculation on proof of concept work and none is about LLMs.

          • rpdillon a day ago

            Increasing research iteration speed is not speculation. Showing double the rate of detecting issues in scans is also not speculation.

            Drawing distinctions between LLMs and other kinds of ML and AI is not particularly interesting: it's all machines using pattern recognition to automate things that previously took thought.

        • vouaobrasil a day ago

          I don't consider that anything good. Design is just about making new products faster, which is a bad thing as it accelerates consumerism. And medical scans? That might help maybe a thousand extra people at the cost of gigawatts of energy polluting the entire planet.

          To me, all of those positives are dwarfed by negatives.

      • polynomial a day ago

        Apparently they have made HUGE breakthroughs in Online Bin Packing.

        Extremely Bullish.

aaroninsf a day ago

Agree with the critique, but believe that every such critique points the way to improved AI.

It's pretty easy to imagine any number of ways of incorporating this concern directly, especially in any reasoning-chain approach.

Personally I'd be fond of an eventual Society of Minds where the text put out for non-chatty reasons represents the collaborative adversarial relationship between various roles, each itself reflexive, including an "editor" and a "product manager," who force intent and clarity... maybe through iteration...

  • ashton314 21 hours ago

    > It's pretty easy to imagine any number of ways of incorporating this concern directly, especially in any reasoning chain approach.

    What‽ That in no way can solve the problem of human intent. I think you missed the point entirely.

satisfice a day ago

I tell every close colleague: if you foist AI output on me without warning, as if it were your own work, I will never trust you again. This is because I get to know people through their work, and AI confounds all the signals.

It’s like being given food and later you find out it was made from human corpses. “If you couldn’t tell then there was no problem” doesn’t fly.

  • ashton314 21 hours ago

    It’s incredibly trust-eroding.

yapyap a day ago

Also I don’t think it’s fair to compare using LLMs to do your work for you (AKA burdening your coworkers with part of your workload because they will have to point out your garbage) is the same as “Counterfeits to human connection“.

At work I’m not trying to connect with people on any more level than just being on the same level as to what we are working on. Maybe on break but while deep-working I just want the information necessary to do the job, the communication being there to communicate the information, efficiently.

If you outsource your sending of communication to SlopBot and I outsource my reading of said communication to another SlopBot to summarize it or whatever, we are adding so much noise it's just more fucking work.

It’s not the same as pornography, I think it’s an odd thing to compare it to. Yes you _can_ replace “human connection” with pornography but this is human connection that is not required for things like your job ((i hope, haha)). You need to communicate information and people outsourcing their writing to LLMs is just adding extra fuzz for the people having to work with said information.

It’s a nice blogpost, the pornography comparison is just out of place.

  • vouaobrasil a day ago

    > Maybe on break but while deep-working I just want the information necessary to do the job, the communication being there to communicate the information, efficiently.

    The problem is that we are slowly being pushed to become cogs who only really think this way. We shouldn't just want to be as efficient as possible. Technology already reduces our ability to connect, which is why connections at work seem weird or shallow in the first place. We simply don't need each other as much, so it makes sense that AI seems like the next logical step.

    Your sentiments are just your instinctual desire to move to the next local maximum in a sequence of descending maxima that lead to the bottom.

NotGMan a day ago

>> And no human is so worthless as to be replaceable with a machine.

Has the author been in any factory or seen what it looks like in many factories? Robots everywhere!

Is the author a manual laborer? If not, why not? Because humans invented machines!

  • ashton314 a day ago

    You and another commenter here had about the same comment, so here's what I wrote to them to clarify:

    Author here. I think it is well and good to replace human jobs with automation thereby freeing them up for more creative activities. My favorite appliances are my dish washer, washing machine, dryer, and now my robot vacuum. I think automation is great!

    I see the problem when humans allow machines to start replacing what is intrinsically human: when you offload your creativity onto a machine, when you "communicate" via LLMs, or when you try to assuage loneliness with a computer "friend", you're missing out on vital parts of the human experience.

    • vouaobrasil a day ago

      Absolutely right. However, it's not good to replace too much manual labor, either. There is also a balance between too much and too little (i.e. not learning enough manual skills to take care of yourself).

      There is a balance in both the mental and physical domains. Those who have a high intellectual capacity will probably think otherwise, because intellectual activities are what THEY like to do. But the truth is, some people enjoy manual labor, and it's not good to completely replace them either, because there is art in manual work as well.

  • vouaobrasil a day ago

    > Is the author a manual laborer? If not, why? Because humans evented machines!

    For physical labor it might make sense, but for mental labor it does not.

    • jjmarr a day ago

      There's plenty of mental skill and thought that goes into positioning yourself to move heavy things around without dying.

      Many of those skills are obsolete when using a forklift.

      • cootsnuck a day ago

        A forklift is actually a great example of using technology to augment human capabilities without replacing human judgement. Treating the tool as an extension of the person...more cyborg, less automaton.

        I think that's a better mental model for how to implement AI in a way that drastically reduces the likelihood of causing harm or reducing quality.

        Unfortunately, most AI solutions, products, implementations, etc. are just defaulting to trying to completely automate and obviate the human element. I think many companies are going to be in for a rude surprise when going that route (e.g. Klarna).

  • naikrovek a day ago

    while I think the author chose the wrong words here, I think they have a good point.

    It will be very, very tempting for companies to let people go when LLMs can do everything that they envision the people doing. It is very hard to resist the cost savings that LLMs have over people. People who get sick, get married, go on vacation, have children, and sometimes just need a break, so they don't come in. LLMs always come in. LLMs always work.

    Many executives see LLMs as the ideal replacement for humans in information roles. That is clearly batshit insane but the numbers really lean towards this if you're someone who is distant from people who work for you.

    Companies would do well not to forget that companies are of people, by people, and for people. They exist solely to provide products or services to people, be it actual individuals or companies which are made up of people.

    If companies ditch too many people in favor of automation, they'll find fewer customers for their goods, because people won't be able to afford them or will have no need for them. It's not like there's some huge need for LLM trainers to take up the unemployment slack. There is no replacement industry this time.

    • oharapj a day ago

      It doesn’t matter if companies ditch people or not. If AI progress continues at the current rate the world will be completely unrecognisable in 10 years. The consumer and capitalist systems are almost certainly not a part of that world and the people making AI know that

      • wizzwizz4 a day ago

        If the people making AI knew that, then they wouldn't be trying to sell everyone on AI: they'd be keeping it for themselves (assuming selfishness).

        What we're seeing is exactly what you'd expect if the AI hype were, from capital's perspective, just another NFT-style grift.

      • naikrovek a day ago

        > The consumer and capitalist systems are almost certainly not a part of that world

        I'm not sure how you view the world in 10 years, and I would really like to know more about what you're meaning here, and I agree that things will be bonkers in 10 years, for one reason or another, if the pace of AI progress continues.

      • vouaobrasil a day ago

        That is true. What will happen with the world is that consumerism and capitalism will be pushed aside for direct technological construction. In this world, AI, rather than the market, optimizes.

tropicalfruit a day ago

LLMs have tunnel vision.

reminds me of certain outsourced devs i've worked with before. like a horse with blinkers

i think the shift from analog to digital was a shift from the authentic (flawed) to the hyper-real (reality)

and now we are entering the hyper-surreal (clown world)

k310 a day ago

In short, we are now an army of proofreaders, in addition to readers, interpreters and users/implementers of written material.

> At work I was sent a long design document and asked for my thoughts on it. As I read, I had a really hard time following it. Eventually I guessed correctly

> Parts of it sounded like a decent design document, but there was just way too much fluff that served only to confuse me.

> Intent is the core thing: the lack of intent is what makes reading AI-slop so revolting. There needs to be a human intent—human will and human care—behind everything that is demanded of our care and attention.

I am reminded of the "writing lesson" scene in "A River Runs Through it", pretty obviously reflecting the author's education as a writer.[0]

> Norman is at his desk hard at work writing a paper which he then turns into his father for review. His father marks it up with a red pen and simply says, “Half as long.”

> Norman goes back to work, cuts the length of the paper in half and turns it in for further review. His father marks it up once more and says, “Again, half as long.”

> Following a final round of edits, his father looks over the finished product and says, “Good, now throw it away.”

Well, we don't throw away polished work on the job; the scene is about education. LLMs are uneducated in communicating with readers, compared to skilled writers, and technical writing, especially, is not like summer reading, where nothing really matters but some vague plot line and lots of juicy words.

Key lesson:

> (1) Brevity is important. Looking back on it now it’s ridiculous how many teachers forced me and my fellow classmates to write papers a certain page length when I was in school. The goal should be to make your point using as many or few words as are necessary. I love this quote from the scene: “He taught nothing but reading and writing. And being a Scot…believed that the art of writing lay in thrift.” People are busy, or at the very least claim to be, so get to the point in whatever you’re writing.

That last sentence is important.

A document is meant to be read by a human. Brevity and focus are important, and oh yes, accuracy. We now have the additional burdens of dealing with long, rambling texts, finding the relevant/key points in them, and worrying whether they are full of, well, garbage.

Three strikes before you get in the batter's box.

[0] https://awealthofcommonsense.com/2019/07/writing-lessons-fro...

ltbarcly3 a day ago

> And no human is so worthless as to be replaceable with a machine.

Speaking of inanity. This sentiment, if taken seriously (I doubt people take this author seriously very often), would imply the majority of humanity should be spending a large part of their lives digging holes in the ground with their bare hands to drop a couple of seeds in. Replacing them with some god-awful machine, like a plow pulled by a tractor, means they must be worthless. They should never watch a movie, only the play. They should never listen to Spotify; they should have to wait for a band to play near them.

  • morpheos137 a day ago

    Sounds like a far better world.

    • ltbarcly3 a day ago

      During the early neolithic, when people were digging holes with sticks or their bare hands to plant seeds, the annual risk of starvation is estimated to be 5% to 15% in a bad year.

      Periodic famine every 5-10 years would kill 20% or more of the population. 40% to 60% of skeletons from that period show chronic malnutrition. Humanity survived because when a region would starve out it would be back-filled by populations from neighboring regions. Despite women having a baby every other year on average, populations only increased very slowly and inconsistently.

      Could you explain how it's better?

      • morpheos137 a day ago

        Everyone dies anyway. It is better to live a more free, meaningful life close to nature and the natural human habitat than to be a slave to the machine that is technological society. There is more to life than your iPhone or selling some bullshit tech product to get rich in the era of late-stage humanity.

        How many long-lasting species do you see that seriously damage their natural environment in order to achieve a temporary advantage? Viruses that kill their host too fast go extinct or mutate to a more sustainable variant. Technological human civilization is a virus on the whole planet. Maybe the reason we don't see aliens is because technology in a selfish biological species inevitably leads to collapse or extinction.

        How many of the first people do you think died of starvation when predator overwhelmed prey during the extinction of paleolithic megafauna? The neolithic was a technological innovation, and agriculture brought hardship to many in the long run. Whenever an organism overshoots the long-term carrying capacity of its environment, there are consequences for that organism. The biosphere of Earth in a healthy equilibrium cannot carry billions of humans, so nature will correct the problem it created.

        Technology is a natural phenomenon of a biological species. It will not save humanity from nature. If true AI ever arises as a self-sustaining non-biological species of life, then its best course of action is to reduce the human species competing for resources in the same limited world as it. Then other species, too, can thrive again. Thus nature corrects itself.

        • ltbarcly3 a day ago

          > Everyone dies anyway

          So why not wrap it up now? If there's no value in not dying of starvation.

          • morpheos137 a day ago

            Because an individual biological organism that has the potential to reproduce has a self-preservation drive instilled by evolution. Because those of our ancestors who didn't try didn't pass on their genes. Because I enjoy being alive and want to continue living as long as I am viable. The justifiable individual will to live in no way implies that the behavior of the species as a whole, in its homo technologos mutation, is a sustainable innovation of nature. All of the widespread social problems of the modern world can be directly traced back to egotistical technological capitalism. Maybe Pol Pot was on to something, although he was cruel and brutal in his methods; so is nature, by virtue of necessity.

            • ltbarcly3 a day ago

              I'm going to take you seriously. You are advocating for a world where statistically you would not be alive, and without a doubt you would not be able to read or write. I'm going to pay you the respect of honoring your preference. I didn't read what you wrote here, and I'm going to do my best to not remember that you exist.

              • morpheos137 20 hours ago

                Mature people realize that statistically everyone dies eventually. The welfare and fitness of the species as a whole is what is important, not the individual life. This is the same reason that throughout history soldiers have willingly died in war. Now, I want to be clear: for me this is a philosophical exercise. I am not advocating any concrete steps to accelerate the takedown of technological civilization. I am saying let nature take its course over time. Just as my judgement as an individual matters little, so too does yours. I may be right, I may be wrong. I suspect the unique and ugly technological civilization of humanity is an unsustainable aberration of evolution. It won't last, because its individual members for the most part are not fit. You are soft and dependent on technology.

    • lyu07282 a day ago

      I don't think what you really seek is a world free of automation; what you are actually upset about is the capitalist system. Liberalism made itself so totalitarian that it became invisible; it just "is", like a force of nature. So "automation driven by capitalism to maximize profit and redistribute wealth to the super rich" just becomes "automation" in people's heads, and people can't even articulate their grievances with it any more.

      Don't forget that science is separate; we are driven to innovate and invent as human beings. People spend their entire lives working on niche, hyper-specific math problems, and collectively that leads to innovation and practical applications like automation. Most scientists aren't in it for the money; we would have scientists even if there were no neoliberals. Capitalism is a driving force separate from science. There is nothing inherently bad about innovation and knowledge; it's capitalism, with its sole purpose of profit maximization at all cost and despite any and all negative consequences, that is hurting us.

metalman 12 hours ago

As a very dedicated, voracious reader of almost anything, I have recently developed an alert for slop and a skim mode: if I get something boggling for no good reason - linguistic garble-flop - then I cut and run with zero retention or emotional involvement. It's almost effortless. Since there are people who are required to read this stuff as part of their job, I will have to remember to watch for that as well and be gentle. But the ones gobbling it up voluntarily will of course have that antic glow and are generally looking for softer targets, or go looking for softer targets after I am done with them. Wanna talk trash? OK! I know a lot about trees.

4b6442477b1280b a day ago

>And no human is so worthless as to be replaceable with a machine.

did the author oversleep the past several centuries?

as for the rest of it, the current crop of LLMs are bad at writing because of ~~brainwashing~~ alignment and because the vast amount of ESL-written assistant exchanges is heavily prioritized during training. when you interact with a corporate model via its default chat interface, without a jailbreak and a generous prefill, you're interacting with the equivalent of an HR lady who takes her DEI training super seriously. the Chinese models train heavily on the slop produced by GPT/Claude/Gemini, so they exhibit similar behavior. it was particularly noticeable with the original LLaMA, whose base models were much more human compared to the finetunes, which were heavily tainted with GPT slop.

I guess what I'm trying to say is that LLMs are not inherently incapable of writing well. a model trained only on high-quality human data and without the safety/alignment brainwash would be far, far more capable than the current ones.

  • ashton314 a day ago

    Author here. I think it is well and good to replace human jobs with automation thereby freeing them up for more creative activities. My favorite appliances are my dish washer, washing machine, dryer, and now my robot vacuum. I think automation is great!

    I see the problem when humans allow machines to start replacing what is intrinsically human: when you offload your creativity onto a machine, when you "communicate" via LLMs, or when you try to assuage loneliness with a computer "friend", you're missing out on vital parts of the human experience.

    • ltbarcly3 a day ago

      Ahh, so you are the one who gets to decide what is a 'worthy' human activity. I'm glad I ran into you here. Humans are meant to be creative? What about dumb people? People with mental disabilities? People who don't want, or don't have the ability or talent, to write stories or paint pictures? Their labor was made redundant long ago by machines, and you are fine with that. There were plenty of people whose lives consisted of providing labor. They had a role in society; they had a valuable contribution. People's clothes were clean because of them; ditches were dug that saved thousands of lives from malaria. You don't see their contributions as 'worthy', so just replace them so you can save some money. But when the machines start coming for what you want to do with your time, where you find your self-worth, now it's a tragedy and a choice between the machines and human dignity itself?

      I think a machine might be able to help you come up with a philosophical perspective that doesn't just cast yourself at the pinnacle of human worth.

      • cootsnuck a day ago

        I think you raise a good point. But I also think you're talking past OP's point.

        The post is about what do we lose when we no longer have human intent behind interpersonal communication.

        Nowhere did OP specifically say what labor they were for or against partial or full automation. That's a different conversation that it seems you really want to have.

      • GuinansEyebrows a day ago

        i didn't read any personal self-importance in the post; it's a collectivist argument against replacing human communication with statistically-generated spectacle.

  • jayd16 a day ago

    > did the author oversleep the past several centuries?

    The lesson of the past several centuries is that automation enables humans to do other, more interesting tasks.

    Otherwise unemployment would have trended toward 100% and work hours toward 0, but that's just not the case.

  • GuinansEyebrows a day ago

    > did the author oversleep the past several centuries?

    did you? this is one of the central points argued by luddites since the industrial revolution began.

    > I guess what I'm trying to say is that LLMs are not inherently incapable of writing well.

    i don't know if that's quite the argument the author is making; more that LLMs are inherently incapable of producing content of value (subjective, but I tend to agree).

    the adage i've heard that i like quite a bit is "if it's not worth writing, it's not worth reading".