Message Boards

The Humanities and Wolfram Language: a WTC 2018 follow-up

8 Replies

Thanks for the references.

The afterword to Lakoff and Johnson's book contains a short discussion of a new viewpoint: a Neural Theory of Language. Some of the references are a bit old for such a rapidly evolving field, but I will check them out.

I think that one problem, particularly for people like me doing unfunded explorations, is that the dictionary (word list) in Wolfram Language is too wimpy for anything more involved than "bag of words" computations. I found that the first edition of the OED is out of copyright, and there are PDF scans of the 20 volumes, but extracting the words into something useful is simply too big a task for one person. (The scans are not that sharp, so the error rate would be too high to automate the extraction accurately, in my opinion.) It would be great if Oxford University Press would license the OED to Wolfram, but I doubt that would happen. It might be possible to cobble together something analogous to the usage quotes in the dictionary by using all the out-of-copyright literature that is already in plain text, but I suspect that it would need expert curation.
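For readers who haven't met the term, here is a minimal sketch of what a "bag of words" computation amounts to. It is in Python rather than Wolfram Language, and the sample sentence is a made-up illustration; the point is that everything except word frequency is thrown away.

```python
from collections import Counter
import re

def bag_of_words(text):
    # Lowercase and split on runs of letters (keeping apostrophes).
    # All word order, syntax, and context is discarded -- which is
    # exactly the limitation being discussed above.
    return Counter(re.findall(r"[a-z']+", text.lower()))

doc = "The map is not the territory; the word is not the thing."
print(bag_of_words(doc).most_common(3))
# → [('the', 4), ('is', 2), ('not', 2)]
```

Anything beyond this (metaphor, usage, sense distinctions) needs richer lexical resources than a frequency table, which is where a dictionary on the scale of the OED would come in.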

My suspicion is that this will be just another stage in the development of AI: great things are promised, then hardware (and possibly conceptual) limitations are hit, and the field stagnates until the next breakthrough. I am not sure that current hardware can do a decent job with an 800,000-entry dictionary carrying the kind of added information that the OED has.

The main downside to this is that people may make commercial use of the fruits of the current state of NLP and not realize its (severe) limitations.

Right now, this problem is on the back-burner for me until I can find an economical way to use a GPU to do Neural Nets on macOS.

Good point, George! Coincidentally, I have just found out about another strange shortcoming of NLP, even more alarming to me because it involves a much more primitive operation: current state-of-the-art word embeddings cannot distinguish between synonyms and antonyms. This is perplexing to me, as such an ability is the basis of semantics. Some specialized work is going on in this direction, but it is not commonplace.

https://www.aclweb.org/anthology/N15-1100

http://propor2016.di.fc.ul.pt/wp-content/uploads/2016/07/BrunaThalenbergPROPORSRW2016.pdf

http://tcci.ccf.org.cn/conference/2018/papers/141.pdf
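To make the failure mode concrete, here is a minimal Python sketch of the cosine-similarity test usually applied to word vectors. The three-dimensional vectors below are hypothetical stand-ins, chosen only for illustration, not real embeddings; but the pattern they show is the one reported in the papers above: with distributional embeddings (e.g. word2vec or GloVe), an antonym pair like "hot"/"cold" often scores as close as a synonym pair like "hot"/"warm", because antonyms occur in nearly identical contexts.

```python
import math

def cosine(u, v):
    # Standard cosine similarity: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical vectors, illustrating that an antonym pair can sit
# just as close in the space as a synonym pair.
vec = {
    "hot":  [0.9, 0.8, 0.1],
    "warm": [0.8, 0.9, 0.2],
    "cold": [0.9, 0.7, 0.2],
}

print(cosine(vec["hot"], vec["warm"]))  # synonym pair: high
print(cosine(vec["hot"], vec["cold"]))  # antonym pair: comparably high
```

Distinguishing the two cases requires information (such as lexical resources or special training objectives) beyond raw co-occurrence, which is the point of the work cited above.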

POSTED BY: Vitaliy Kaurov

More follow-up. I found this very informative book:

Metaphors We Live By, by George Lakoff and Mark Johnson. I think anyone interested in natural language processing will have to deal with the points raised in this book. I think that if the use of metaphor in everyday language is not handled properly, then "natural language" will become yet another technical term that means something close to the commonplace idea, but sufficiently different to render any insights generated questionable.

I have some ideas about how to move from "bag of words" analysis to something a bit more useful, but I have to test them before presenting them.

Posted 6 years ago

George, I wonder whether your concerns could be addressed by the Cyc Project run by Doug Lenat of Cycorp (www.cyc.com), the longest continuously running project in AI. Stephen and Doug worked together when they were younger. Stephen said some years ago that before Alpha was announced, he connected with Doug to say that Alpha wasn't intruding upon Cyc's turf. Doug gave a thumbs-up to Alpha.

The Cyc Project's goal is to endow automated reasoning with common sense. It splits human knowledge into diverse microtheories, each equipped with the modes of reasoning, discussion, idioms, etc. of that particular area of knowledge. There's a microtheory for mathematics, religion, organizations, physics, chemistry, fantasy, etc. Lenat's experts range from historians to mathematicians to computer scientists to Airborne Rangers. I wonder how this content could be tapped by Alpha (it may be worth asking Wolfram if they've investigated any potential WL-Cyc collaboration).

Bob, we share a common perspective on history. I've started working on a WTC2019 presentation that discusses cliometrics (the mathematical modeling of history) -- let's see if the abstract flies. As you may recall from his WTC2017 talk, Kuba Kabala (Davidson College) does textual analyses (using WL) of documents from the medieval papacy. Hopefully he'll have another blockbuster for WTC2019 (Kuba, you still have an open invitation to talk to the Washington DC-Area Mathematica Special Interest Group when it relaunches at The Catholic University of America in the Spring).

POSTED BY: Bruce Colletti

Thanks for the comments. I originally made this post to gauge interest in this topic.

Tools can only measure what they are designed for. I see far too much quantification bias in the world. One glaring (and dangerous) example is the insistence on "measurable outcomes" to determine the effectiveness of education. The most important things in life can't be quantified, yet we are determined to make numerical surrogates for what we are interested in, or should be interested in if we are going to have a thriving civilization.

I recently purchased Ursula K. Le Guin's "The Books of Earthsea", an omnibus edition of books I have been reading since the first one came out in the late 1960s. One of the great themes of the books is that of balance, which is hardly surprising, given Le Guin's interest in Taoism.

Balance, in this sense, is not something that can be approached 'scientifically'. Yin and yang, or "mythos" and "logos" (in the Greek tradition, although we need to understand the original meanings of these terms), are important concepts, and emphasizing one over the other is one thing that has generated problems. It has been said that the rise in fundamentalism was a direct result of the success of science, in that science appeared to devalue the "mythos" that is an essential component of life.

Mathematics is as much an art-form as it is a technology, and it is important to keep this in mind when building tools. Philosophy is full of wrong ideas and dead ends, but it serves as a balance on the tendency towards technological manifest destiny.

There are simply too many unrecognized assumptions in a lot of what I have been reading (and doing -- I'm as guilty as anyone) about the use of technologies such as Mathematica.

Parenthetically, I should also point out that the assumption that because someone attended theological school, that person must be some kind of theist is likely to be wrong. Curiosity can lead all of us into some pretty strange areas, yet without it, we are prone to dogma.

I believe that it is possible to unify yin and yang, as they are unified in the taijitu (yin-yang) symbol. It's too bad that there is not a similar symbol that can unify mythos and logos.

all for now....

George,

Your points are well taken, especially wrt computation being used to create art, music, or literature.

My own interests, though, are to use computation as a tool to study the humanities. My K-12 experience with history was as you described, and now with some years of experience in the real world I appreciate much better the need to study the motivation behind historical events and the trajectory of events and people that led up to the event of interest.

I liken the use of computation to help inform the study of the humanities to that of a magnifying glass, microscope, electron microscope, ... making it possible to observe biology in ever greater detail. I will still use my own mind to make conjectures and draw conclusions. The Eureka! moments are rare, but all the more rewarding because I was the one connecting the dots the algorithm helped me find.

Bob

POSTED BY: Robert Nachbar

Fair enough, and I was using "understand" metaphorically.

What I see happening, though, is that events (not problems, as such) are being redefined so that the shiny new tools can be applied to them. Even though positivism is not the strong influence it was mid-century, it is still around.

So what we are seeing, in my opinion, is that machine learning is becoming sophisticated at handling an impoverished problem. It is one of the ways cargo-cult science evolves from real inquiry.

The humanities people, for the most part, will see right through this.

For what it's worth, my talk was popular with a small set of conference attendees, all (?) of whom had some connection with the arts and humanities, in addition to being Mathematica users. For the rest of the conference, these ideas were pretty much below the radar.

It's really too bad, because I can see that the STEM people really need the help of the humanities if they are to avoid pursuing blind alleys.

Posted 6 years ago

Interesting to see philosophy posted on a software applications forum. I am reminded of the Chinese room thought experiment, due to John Searle and discussed by Roger Penrose, which posits people inside a room who use an algorithm to pass a Turing test successfully in a language they do not know, and hence with no understanding of the questions posed by the examiner.

To say AI understands something is false, because it is merely reacting to stimuli. However, I believe those reactions can become more and more sophisticated, to the degree that they will eventually become indistinguishable from the reactions of a human.

POSTED BY: BJ Miller
