
Message Boards

35.1K Views | 31 Replies | 48 Total Likes

Why are some professors negative on Mathematica?

Posted 11 years ago

In discussions with several university physics professors, theoreticians mostly, they profess a strong dislike for Mathematica and caution their students against using it. I can fully understand cautioning a user to make sure they have used the correct syntax or correctly formulated a problem or model, but their caution was stronger than that. They, in effect, prefer to roll their own algorithms. They also claim Mathematica gives bad results, though it was unclear whether the fault lay in the execution or the formulation of a problem.

I am troubled by this attitude, since users in all technical disciplines use Mathematica and rely on it to supply solutions to various designs, models, and analyses, some that are mission critical.

I performed a literature search on evaluation of Mathematica and the most recent published critique and evaluation I found was for Mathematica 5. Other than this, there does not appear to be an undercurrent of suspicion except from these specific profs.

What is going on? I can defend my mathematical models but I cannot defend the outcomes of executions of these models if there is skepticism over the validity of solutions obtained by Mathematica. I can also understand that anyone who has not come up on the learning curve might simply be covering their own inadequacies, hence the attitude of rolling their own. However, everyone should be skeptical of published results from the use of personal algorithms, for which no validation or user community exists.

How can the quality of the results from the use of Mathematica be supported? Are there published evaluations? What might organizations such as DARPA do to validate work for which Mathematica has been a cornerstone of the analysis?

Or am I the only person to have run into this level of skepticism...which is ironic since I am more skeptical of analytical results than most.

POSTED BY: Luther Nayhm
31 Replies

Something that I learned years ago (very likely from Feynman, or quoted about him to me by someone else) is that, when looking at a dataset in a presentation and listening to the presenter's description of the meaning of the data, you should mentally drop the data points at the extremes of the graph and decide whether the conclusions still make sense. Why? The data at the extremes are often there because the experiment stopped being trustworthy near those limits, so the strength of those points is often in question.

POSTED BY: David Reiss

Some comments on "attitude" and "culture". One of the greatest influences on me has been the books on data graphics by Edward Tufte. It is one reason I believe in writing literate notebooks with textual explanations and multiple, carefully designed presentations. Tufte writes: "Those who discover an explanation are often those who construct its representation." Mathematica gives you plenty of tools for constructing representations and exploring their behavior.

And why textual explanation? Because if you can't give a simple clear explanation it should leave you a bit skeptical as to your level of understanding. The same goes for documentation of a routine. If it's difficult to document maybe it should be redesigned or scrapped for some other routine. Textual explanation and documentation are not onerous chores, they are part of the learning or creative process.

I tend to eschew the terms "code" and "programming" in favor of terms such as "writing definitions, axioms, rules and specifications". After all, we're trying to do math and science. WRI can hire the programmers. It's a matter of attitude and culture.

Lots of people in academia write papers and even books as part of the process of better learning the material. Do you think they always knew it all before they started? So even if it's self-study, writing literate notebooks is good practice. And they might very well be good enough to interest other people. The same applies to having students write literate notebooks, even if they are relatively short. When they're finished they actually have something to show off. Science and technology are no good if they can't be communicated.

Also, if students can design new and clear presentations along with explanations, they are not only learning but they are adding value because these do not always exist together now.

And thanks to Luther for starting this discussion.

Posted 11 years ago
POSTED BY: Bill Simpson

If results had error bars, and regression analyses showed the correlation coefficient, a lot of what passes for "knowledge" in sociology, psychology, and medicine would be seen as very tenuous at best.

I think that this is a major cultural problem. No one gets through Physics 1 or any decent engineering course without learning that a measurement lacking an estimate of its error is useless. I used to work for a clinical lab, and the entire notion of knowing what the error bars for a result were was antithetical to the thinking of both the management of the company and the clients.

As I recall, Tufte does discuss error, and I believe there is a section where he shows that radically different sets of data can give identical least-squares fits. However, as you point out, this is not the main thrust of his discussion.
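The example Tufte draws on is the well-known Anscombe quartet. As a minimal sketch in Mathematica (data values taken from the published quartet; symbol names are my own), two visually very different data sets produce essentially the same fit line and correlation:

```mathematica
(* Anscombe's quartet, sets I and II: same x values, very different y behavior *)
x  = {10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5};
y1 = {8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68};
y2 = {9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74};

(* both fits come out near y = 3.00 + 0.500 t, with r ~ 0.816 *)
LinearModelFit[Transpose[{x, y1}], t, t]["BestFitParameters"]
LinearModelFit[Transpose[{x, y2}], t, t]["BestFitParameters"]
Correlation[x, y1]
Correlation[x, y2]
```

Plotting the raw points next to the fitted lines makes the point immediately, which is exactly Tufte's argument for showing the data, not just the summary statistics.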

This whole topic is overdue for a treatment that can be understood by people who just want to use software, without having a deep understanding of numerical analysis. I'm pretty sure that Mathematica would be a good tool for this.

Posted 11 years ago

There was an article published recently, but I can't find it now. Someone in the social sciences at a university was accused of cooking his results. Both sides are now claiming that they are blameless and the other is at fault. Somehow a side effect of this was a substantial effort to carefully replicate a few dozen major, long-accepted classical results in the field. Approximately one third of the replications either failed to support the hypothesis or had sufficiently serious problems with the experiment that no claim could be made. In every case mentioned, the original experimenter is furious that they have been targeted by a witch hunt, that their reputation has been permanently damaged, that they have no way of defending themselves, and that nothing can be done to make up for it. You should track down the original source before using this, just to make certain that I have not somehow skewed the claims. It might have been published in Science in 2013, but I can't be sure.

If the likes of Tufte's standards could begin to make it the norm that uncertainty be clearly shown with every claim, and Mathematica could position itself as one of the leading tools for simply and easily producing such graphics, then, as mentioned above, perhaps everyone would be in a slightly better position to see at a glance which claims have small enough uncertainty to likely be accepted, and which have so much uncertainty that they can be dismissed.

POSTED BY: Bill Simpson
Posted 11 years ago

The quintessential example that comes to mind for me is the Fleischmann–Pons claim of cold fusion.

http://en.wikipedia.org/wiki/Cold_fusion

POSTED BY: Steve M

Bill, that seems a bit off the topic of whether academics are missing out by not making greater use of Mathematica. But maybe not completely. Because of their active and dynamic character, Mathematica notebooks provide the opportunity to present higher-quality results with higher integrity. One can include, in the document, the raw data and the procedures for analyzing and representing the results. Others can check whether these actually work, or find entry points for exposing problems. If people want to lie about data, I suppose they can, but it's not quite as easy to get by with plain sloppiness.

I once worked as a computer consultant for a group of biochemists. At one point I was asked to look at an experiment one of them was doing to measure the rate of some biochemical reaction. The person was very nice and very meticulous in his biochemistry. But when I looked at the reaction system and what he was measuring I discovered to my horror that it had no relation at all to the reaction rate he was trying to measure. Needless to say, he was not pleased with this. I left the group but later learned from someone who replaced me that they continued the experiment and analyzed it with a statistical program. The result was something like 1.0 +/- 10^6 millimole/minute - and they published it!

Bill, Bravo. Very well articulated. Thanks.

Sadly, careful error propagation, null hypotheses, standard error and sample sizes, p values, etc are disappearing from the canonical topics of a physical science education, and many papers appear in press with no discussion of error and uncertainty.

The typical excuse for omission is "curriculum pressure". I think we are just not teaching effectively.

POSTED BY: W. Craig Carter

Bill, your remarks are well put. I think that Tufte does somewhere have a maxim to "be honest", and he gives several suggestions in Visual Explanations, page 34. These might include multiple presentations based on different measures or viewpoints, and textual explanation that addresses various concerns. That's why I'm an advocate of literate notebooks that make a case and try to clarify and present a true picture. I would be very skeptical of any analysis summarized in a single presentation without extended discussion. In the final analysis, it largely depends on the integrity of the person making, or responsible for, the analysis.

Mathematica probably has the tools to present uncertainty and risk. You just have to figure out how to put them together. With a dynamic presentation you can provide information in tooltips. You can provide information in numerical side reports, which may vary as you move a locator about the primary data representation. You might be able to toggle a presentation between various assumptions. And of course there are the old fashioned error bars and multiple presentations. Your suggestion of fuzzing the data representation might work very well. To go one step further, in a Mathematica notebook (or application) you can give the reader active tools for performing various types of data reduction and the actual data itself. I'm not knowledgeable about statistical analysis of data; I'm sure you can work out much better methods. The point is: Mathematica does give you better tools to present honest results.
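As a minimal sketch of the error-bar and tooltip routes just described (the data here are hypothetical, and Around appeared in versions of Mathematica later than this thread):

```mathematica
(* hypothetical measurements with per-point uncertainties;
   ListPlot draws error bars automatically for Around values *)
data = Table[{x, Around[Sin[x], 0.1 + 0.05 RandomReal[]]}, {x, 0., 6., 0.5}];
ListPlot[data, PlotRange -> All]

(* a tooltip variant: hover over a point to see its value with uncertainty *)
ListPlot[Tooltip[#, Last[#]] & /@ data]
```

From there it is a short step to a Manipulate that toggles between presentations or assumptions, along the lines suggested above.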

Posted 11 years ago

Dusty trails indeed, and in my case, of the handful of engineering companies I've worked for, if I wasn't the only Mathematica user, there may have been 2 or 3 others that let it collect, you guessed it . . . dust. For some reason Matlab/Simulink dominates the engineering world and Mathematica is more popular in science, education and maybe finance. One area of computation that is bound to continue to explode is medical research. I wonder if Mathematica played a role in cracking the human genome.

Macsyma over a 300-baud modem predates me somewhat, but nice job catching that error. I do remember running test cases with it from my CRC Standard Math Tables handbook and having much difficulty interpreting the results that Macsyma produced on my bulky, heat-producing cathode-ray tube. Of course, Mathematica has FullSimplify and similar commands to push results into a more human-readable form.

On the topic of supportive learning materials in the 90's I subscribed to Mathematica in Education and Research put out by Telos/Springer. I found this journal to be highly readable and useful as I don't have advanced degrees. It was full of plots, images and code. Thinking back, I learned quite a bit from that little journal as it was very practical in nature.

Does anyone know if this or a similar journal is still in print? I'm aware of The Mathematica Journal, but I've found it to be too theoretical and advanced for my needs.

POSTED BY: Steve M

I first used Macsyma over a 300-baud modem over the Arpanet, from Caltech to MIT, back in the mid-70s. And I found a bug: they had the general solution to the cubic entered incorrectly.

POSTED BY: David Reiss

That was possibly a bug, so to speak, in the CRC reference at the time. It did not really account for choices of roots used, at least in the quartic formula. This may also have been an issue with the cubic. This was stuff I had to work through in 1992.
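The root-choice subtlety is easy to demonstrate in Mathematica: Power always takes the principal (complex) root, which differs from the real root a textbook formula may intend. A quick sketch:

```mathematica
(* the principal cube root of a negative real is complex *)
N[(-8)^(1/3)]    (* 1. + 1.73205 I *)

(* CubeRoot picks the real cube root instead *)
CubeRoot[-8]     (* -2 *)

(* Solve keeps all three roots of a cubic explicit, avoiding the ambiguity *)
Solve[x^3 + 3 x - 4 == 0, x]
```

A cubic formula transcribed from a handbook that silently assumes the real cube root will give wrong answers in exactly the cases described above.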

POSTED BY: Daniel Lichtblau
Posted 11 years ago

"All of the manuals on how to use Mathematica have been written by people who know too much. Their work should be edited by someone like me who is trying to learn what the developers already know."

Luther, I find your basic thesis here by and large accurate. Mathematica does have a steep learning curve and to directly use the more advanced features (particularly the various programming paradigms) one has to think like Mathematica, which for me has not come naturally.

Like others here, I started off with FORTRAN punch cards in college and was simply amazed at what could be done with the "simple" Do loop. Ten or so years later I stumbled upon Macsyma, anybody remember that? It was a command-line math tool that claimed it could do symbolic math; maybe it could, but I did not find it useful. Then I noticed that our technical librarian had a copy of Mathematica sitting on her shelf, version 2.2. After inquiring about it, I learned that one of the chief engineers had owned it originally but returned it to her because he couldn't figure out how to use it. I convinced my company to purchase the Premier Support package, which offers superb technical support, and over the years I would amaze him (and myself) with what could be done with this tool. I also read a few Mathematica books, which I enjoyed at the time, but they didn't boost my usability knowledge as I had hoped. During and prior to those years I became somewhat proficient with FORTRAN, MathCad, Excel, BASIC, Minitab, and Matlab. From my experience, only Matlab comes close to the power of Mathematica, and only if various toolboxes are acquired. Out of the box, Mathematica was the clear winner for my technical computing and data-processing needs. And when serious statistics were added (version 8, I think), there was no need for any other tool.

Fast forward to today, I am still a clunky Mathematica user and I still rely on the basic Do Loop from yesteryear even though much more efficient programming techniques have been available in Mathematica for years. But I've done some pretty neat stuff that no one else in the company has been able to do.
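For what it's worth, the Do-loop habit usually translates directly into the more idiomatic forms; a small sketch of the same computation three ways:

```mathematica
(* imperative style: accumulate squares with a Do loop *)
squares = {};
Do[AppendTo[squares, i^2], {i, 1, 5}];
squares               (* {1, 4, 9, 16, 25} *)

(* the same result, more idiomatically *)
Table[i^2, {i, 1, 5}] (* {1, 4, 9, 16, 25} *)
Range[5]^2            (* {1, 4, 9, 16, 25} *)
```

The functional forms are not just shorter; they avoid the repeated list copying that makes AppendTo inside a loop slow on large data.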

I share the learning curve frustration but I will take a hard to use yet capable tool over an easy to use limited tool any day. And for my technical computing needs, Mathematica has never let me down in the sense that if I could conceive it, Mathematica could be coaxed to solve it; that is powerful.

On a practical note, the approach I have found very useful is to create and maintain a private library of "How To" notebooks, each describing a narrow task that you know you will face again in the future. The source material for these may come from your own hard slugging and self-learning, a call to Tech Support, the online Help, books, or the Mathematica StackExchange site. If I look at my "How To" folder, I see file names like:

How to align entries nicely using TableForm.nb
How to compute running averages.nb
How to conditionally extract values from a list.nb
How to convert time formatted strings to numerical values.nb
How to control minor ticks on a log axis.nb
How to create a 3-D surface plot.nb
How to extract algebraic solutions produced by the Solve command.nb
etc.
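Several of those tasks have tidy built-in answers; a couple of quick sketches:

```mathematica
(* running averages: each value is the mean of a sliding window of length 3 *)
MovingAverage[{1, 2, 3, 4, 5, 6}, 3]   (* {2, 3, 4, 5} *)

(* extracting the algebraic solutions produced by Solve *)
sols = Solve[x^2 - 5 x + 6 == 0, x];
x /. sols                              (* {2, 3} *)

(* conditionally extracting values from a list *)
Select[Range[20], PrimeQ]              (* {2, 3, 5, 7, 11, 13, 17, 19} *)
```

Keeping exactly this kind of minimal, self-contained example in each "How To" notebook is what makes the library quick to scan later.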

I find this far more useful than trying to rely on the built-in Help which, in general, offers a very simple example or two (material that you already know) and then jumps to examples so complex that some Tech Support engineers at Wolfram have literally had difficulty with them, and these guys are good!

Kudos to Stephen Wolfram and his team for inventing, enhancing and maintaining what I think is the strongest technical computing environment on the planet. The professors mentioned at the beginning of this discussion don't know what they are missing.

For an example that I have put to use in my own work, Google and download the Sandia report Sample Sizes for Confidence Limits and Reliability, by J. Darby.

POSTED BY: Steve M
Posted 11 years ago

We have walked the same dusty trail. I had forgotten about Macsyma.

I do some of the things you suggest, but in a less ordered fashion. I have several worksheets into which I have put various solutions and approaches to problems, and I just keep using the same worksheets. My memory is good enough that I remember what is in them, so I open one and start scrolling. I call it my cluttered-desktop approach.

I second your admiration for Wolfram. And I will look up the reference you suggested. Thanks.

POSTED BY: Luther Nayhm
POSTED BY: W. Craig Carter

I basically use Mathematica for all my work unless forced to do otherwise. I write all my documents in it and have done so since around version 3 (when the typesetting became publication class).

POSTED BY: David Reiss
Posted 11 years ago

Clearly, that is something I have to start doing.

POSTED BY: Luther Nayhm

I think it was either Planck or Boltzmann who said that progress in Physics is made at funerals. You advance a new theory or process and wait for the old physicists to die. Having a Ph.D. in Physics myself, I'm allowed to say this. It's probably true for almost all fields of human endeavor.

POSTED BY: Frank Kampas
Posted 11 years ago

I had heard of it as a Planck quote, but I have also heard that it was a quote of a quote. It is like Newton's standing on the shoulders of giants. I think that is a quote of a quote, too. The more modern version of Planck's quote is about progress or acceptance occurring one funeral at a time. Planck's quote was kinder and about acceptance occurring through the younger generation being more familiar with whatever is new and novel and having grown up with it. I am always reminded of the idea of continental drift and how hard it was for the geologists of not so long ago to get their heads around the idea despite the circumstantial evidence. Alvarez's asteroid extinction hypothesis is still being tested...as it should be.

I can understand the push back if it were something truly novel, but in my case it was a redo using more advanced techniques available through Mathematica. The approach was simply a re-reading of something and realizing there was a disconnect between what was said and what was actually done. I am of the opinion that everything should be periodically revisited in light of newer findings, insights, and techniques.

When I started my career, I was performing contract R&D and we were selling applied physics and technology. I remember plowing though the journals to get ideas for applications, since I knew what my customers' problem areas were from a technology perspective. If something was novel, I tested it for applicability to various applications. Got some things so wrong, but got some very right, too. It was a fun time to get into the technology business.

The times were different. We took chances. Even a dumb idea might have a nugget of goodness buried somewhere inside.

POSTED BY: Luther Nayhm
Posted 11 years ago
POSTED BY: Luther Nayhm

Luther, why don't you write me at djmpark@comcast.net and tell me what your project is.

Posted 11 years ago
POSTED BY: Bill Simpson
POSTED BY: Daniel Lichtblau
Posted 11 years ago
POSTED BY: Luther Nayhm

Dear Luther, I've been puzzling over your observation for nearly 15 years, and I also fail to understand why Mathematica/Wolfram Language gets dismissed by my colleagues so haphazardly. In discussions with my colleagues about this question, I tend to go into listening mode and don't advocate for one side or the other. I'll try to summarize what they say below.

Some background:

I'm a professor at MIT in physical sciences and engineering. I've been teaching a course on mathematics and problem solving in materials science for about twelve years, and I use Mathematica extensively in it. I can't judge the efficacy of my course objectively, but my students' evaluations are generally very positive; I've received MIT's highest institutional teaching awards, as well as the School of Engineering's. I believe that these awards reflect the quality of the Mathematica-based course. I give nearly all of my research presentations using Mathematica, and these are received well (though such feedback tends to be biased toward the positive).

The summary (I don't agree with many of these, and am still honing my counterarguments):

1) Mathematica is not as fast as X (X may or may not be a compiled language).

2) Mathematica's syntax creates too steep of a learning curve.

3) Our (particular) scientific community uses X, so there are many more routines and working examples in X.

4) I am already using X; why should I change?

I believe most of the above are the result of "user inertia".

I don't hear the comment that Mathematica gives "bad results" so much now. I doubt that many faculty actually roll their own algorithms; they are probably thinking back to when they were postdocs or graduate students. However, there are happy exceptions to this.

I believe that the best way forward is to offer students a choice, and hope that they will make an unfettered choice of their preferred programming language. However, I am not so optimistic: a student who went to graduate school at a university down the street wrote to me saying, "I'm taking this class and they insist on using X, and I asked if I could use Mathematica instead, and he said no. We were doing more advanced things in 3.016 (the sophomore class I teach)."

POSTED BY: W. Craig Carter
Posted 11 years ago
POSTED BY: Luther Nayhm
POSTED BY: Daniel Lichtblau

Hello Danny and Luther. That makes three of us who started with punch cards.

As bizarre as it might seem, I think punch cards were a beneficial way to learn to program. I spent so much more time thinking about the program and doing thoughtful debugging, because submitting a job was so painful. I confess that I often debug now with less reflective thought, because I can do dozens of haphazard experiments in minutes; this probably speeds up the debugging process as often as it slows it down.

I was sold on symbolic computation (pre-Wolfram Language) when I was able to solve classical mechanics homework faster and more accurately than my classmates. I remember that just being able to do a Taylor expansion around a point, copy down the results, and redraw CRT-rendered plots into my homework felt like cheating.

I also remember my own "inertia" in switching from "symbolic-program-x", which was free at Berkeley (and had an opaque syntax that I had learned moderately well), to Mathematica 1.

Craig

POSTED BY: W. Craig Carter
Posted 11 years ago

Ah, memories!

POSTED BY: Luther Nayhm