Several observations:
I absolutely love computational essays as a medium for lectures; they should be used everywhere, for all sorts of online presentations. I loved the example in Lecture #4 of projecting the "shadow" of a multidimensional array onto a 2D surface. That provides a perfect model for what happens when we make puns: there is some orientation of projection of a multi-dimensional word-space where the "shadow" has particular words lining up next to each other. There are many different kinds of puns, just as there are many orientations in which one could project the shadows of a word-map.
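The shadow idea can be sketched in a few lines. This is my own toy illustration, not code from the course: the word vectors and projection planes are made up, but it shows how two different "orientations" cast two different 2D shadows of the same high-dimensional points.

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["bank", "river", "money", "loan", "shore"]
embeddings = rng.normal(size=(len(words), 10))  # made-up 10-D "word vectors"

def shadow(vectors, seed):
    """Project high-D vectors onto a random 2D plane (one 'orientation')."""
    # QR gives an orthonormal basis for a random 2D plane in 10-D space.
    plane = np.linalg.qr(np.random.default_rng(seed).normal(size=(10, 2)))[0]
    return vectors @ plane  # shape (n_words, 2): the 2D shadow

shadow_a = shadow(embeddings, seed=1)
shadow_b = shadow(embeddings, seed=2)
# The two shadows place the words differently, which is the pun analogy:
# each orientation of projection lines up a different set of words.
```

The point of the sketch is only that nearness in a shadow depends on the chosen plane, just as a pun depends on a particular way of flattening the word-space.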
Indirectly, this course offers some profound commentary on learning. In the context of training a neural net, errors are never a problem. Errors are simply a way to re-jigger the neural network to behave better in the future, that is, to train toward a local minimum. Maybe the most important strategy for learning is seeking out strategies-for-possibly-making-errors, noticing when you make errors, and letting the neural network rewire itself as appropriate. We're so obsessed with avoiding [public] errors; that's really silly.
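The "errors are the point" observation is literal in gradient descent. Here is a minimal sketch of my own (a one-weight toy, not course code) where the error on each step is exactly the signal that re-wires the weight toward a local minimum:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x  # the "right answer": the true weight is 3.0

w, lr = 0.0, 0.1  # start completely wrong
for _ in range(50):
    error = w * x - y              # being wrong is not a problem...
    grad = 2 * np.mean(error * x)  # ...the error IS the training signal
    w -= lr * grad                 # re-jigger the weight accordingly

# After training, w has been pulled close to 3.0 purely by its errors.
```

Without the errors there would be nothing to learn from; the update rule has no other input.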
The commentary about which problems AIs cannot solve was as valuable as anything else in this course. I like how Stephen went back to his computational automata to explain this.
Word puzzles and number puzzles are interesting. When playing Knotwords, it's useful to know vowel-consonant combinations and groups of three letters. At the same time, that's insufficient to do well at the game. You come face-to-face with strange rules: "if I have guessed an 8-letter word, it must be right". I've gotten better over time, but I couldn't really say how.
Wolfram Research has a compelling value proposition with LLMs + Wolfram|Alpha + the Wolfram Language. It's uncanny how well Wolfram Research's past investments in a vast library of human knowledge complement LLMs. Understanding the shortcomings of LLMs, and how judicious use of the Wolfram Assistant contains them, is an important thing to explain to others.
The Q&A that Arben published from this course is tremendous. Arben's comment that code functions more like a natural language than an algorithm is intriguing. I am paraphrasing his comment; I'll have to scrutinize his written answer.