commit 17aaabd7b8e4236131830d90079965e1433af5e5
parent 6d2758b7df0e442a432eec00501f1553cc9f5757
Author: Eamon Caddigan <eamon.caddigan@gmail.com>
Date: Sun, 3 Aug 2025 21:09:07 -0700
Add weeknote for 2025-W31
Diffstat:
1 file changed, 54 insertions(+), 0 deletions(-)
diff --git a/content/posts/weeknotes/2025-w31/index.md b/content/posts/weeknotes/2025-w31/index.md
@@ -0,0 +1,54 @@
+---
+title: "Weeknote for 2025-W31"
+description: "A last note (for a while) on LLMs"
+date: 2025-08-03T11:28:43-07:00
+draft: false
+categories:
+- Weeknotes
+tags:
+- LLMs
+---
+
+Most folks who would ever read this weeknote probably already have a good
+understanding of how LLMs work, but [this blog post from Josh
+Sharp](https://joshsharp.com.au/blog/how-to-think-about-ai.html) is a great
+explainer I foresee sharing with others.
+
+It reminds me of the first half of this presentation that Emily Bender gave
+almost exactly two years ago, but which I only came across recently:
+[ChatGP-why: When, if ever, is synthetic text safe, appropriate, and
+desirable?](https://youtu.be/qpE40jwMilU?si=bsDYAgDLaKP_ELEq&t=2330). Much has
+changed since it was delivered: the hype around LLMs has gained unprecedented
+momentum, more people have tried LLM-based chatbots and been exposed to “AI”
+search results, and the models themselves and the tools built around them can
+do things today that simply weren’t possible then. Yet _none_ of these
+developments undermines the substance of Prof. Bender’s critique.
+
+I link to a section toward the end of the presentation (38 minutes and 50
+seconds into it), where---after explaining how these models work and the
+fundamental limitations of the technology---she finally outlines some “safe and
+appropriate” uses of synthetic text[^ethics]. If you don’t have time to sit
+through the whole presentation, consider watching the five minutes that begin
+at this point.
+
+Criticisms of synthetic text echo Edward Tufte’s 25-year-old critique, [The
+Cognitive Style of
+PowerPoint](https://www.edwardtufte.com/book/the-cognitive-style-of-powerpoint-pitching-out-corrupts-within-ebook/).
+I suspect that it’s no coincidence that the most enthusiastic adopters of LLMs
+at most organizations[^management] seem to be managers, the same people who
+would rather watch a slide deck than read a report.
+
+I’ve already [written about LLMs]({{< ref "/tags/llms" >}}) as much as I care
+to, so I promise that this will be my last note on them for a while; unless
+something changes that would invalidate the points above, I don’t see the need
+to keep hammering them. I’d much rather write about the new (and new-to-me)
+things that excite and interest me.
+
+[^ethics]: Assuming that the ethical considerations about the source of the
+training material and the energy demands of building the model are also
+addressed; i.e., this talk outlines the appropriate uses of a _speculative_
+“ethical model”, but _not_ any of the popular models from (e.g.) OpenAI,
+Anthropic, Google, or Meta.
+
+[^management]: This is based on my own experience and conversations with peers; I would love to see data to support or refute this assertion.