---
title: "Weeknote for 2025-W31"
description: "A last note (for a while) on LLMs"
date: 2025-08-03T11:28:43-07:00
draft: false
categories:
- Weeknotes
tags:
- LLMs
---

Most folks who would ever read this weeknote probably already have a good
understanding of how LLMs work, but [this blog post from Josh
Sharp](https://joshsharp.com.au/blog/how-to-think-about-ai.html) is a great
explainer I foresee sharing with others.

It reminds me of the first half of this presentation that Emily Bender gave
almost exactly two years ago, but which I only came across recently:
[ChatGP-why: When, if ever, is synthetic text safe, appropriate, and
desirable?](https://youtu.be/qpE40jwMilU?si=bsDYAgDLaKP_ELEq&t=2330). Much has
changed since it was delivered: the hype around LLMs has gained unprecedented
momentum, more people have tried LLM-based chatbots and been exposed to “AI”
search results, and the models themselves and the tools built around them can
do things today that simply weren’t possible then. Yet in spite of this
progress, _none_ of these developments undermines the substance of Prof.
Bender’s critique.

I link to a section toward the end of the presentation (38 minutes and 50
seconds into it), where---after explaining how these models work and the
fundamental limitations of the technology---she finally outlines some “safe and
appropriate” uses of synthetic text[^ethics]. If you don’t have time to sit
through the whole presentation, consider watching the five minutes that begin
at this point.

Criticisms of synthetic text echo this 25-year-old critique from Edward Tufte,
[The Cognitive Style of
PowerPoint](https://www.edwardtufte.com/book/the-cognitive-style-of-powerpoint-pitching-out-corrupts-within-ebook/).
I suspect that it’s no coincidence that the most enthusiastic adopters of LLMs
at most organizations[^management] seem to be managers, the same people who
would rather watch a slide deck than read a report.

I’ve already [written about LLMs]({{< ref "/tags/llms" >}}) as much as I care
to, so I promise that this will be my last note on them for a while; unless
anything changes that would obviate the above, I don’t see the need to keep
hammering these points. I’d much rather write about the new (and new-to-me)
things that excite and interest me.

[^ethics]: Assuming that the ethical considerations about the source of the
    training material and the energy demands of building the model are also
    addressed; i.e., this talk outlines the appropriate uses of a _speculative_
    “ethical model”, but _not_ any of the popular models from (e.g.) OpenAI,
    Anthropic, Google, or Meta.

[^management]: This is based on my own experience and conversations with
    peers; I would love to see data to support or refute this assertion.