How to tell if there is a side effect?
There are two possible questions here. I'll answer both. If you're writing a function and you want to make sure there are no side effects, then you must make sure that your function references only those variables that are passed in as arguments. If your function references any other variable, then you have a side effect. This can be tricky if your function calls another function, because that function might itself reference non-argument variables (or call still other functions that do). So, for example, if your function uses RandomInteger (or any of the random functions), then your function has a side effect, because RandomInteger depends on a special system variable. You could "fix" this by using SeedRandom, but then your function wouldn't be random any more. In this case, you just accept the side effect. There might be other subtle ways to create side effects, but referencing non-argument variables is the main way.
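Here is a minimal sketch of the distinction (the names pureSquare, loggedSquare, and counter are just hypothetical examples):

counter = 0;
pureSquare[x_] := x^2                      (* references only its argument: no side effect *)
loggedSquare[x_] := (counter += 1; x^2)    (* references and modifies the global counter: side effect *)

loggedSquare[5]
(* 25 *)
counter
(* 1 *)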
The second question is how to tell whether a function that you didn't write but want to use has a side effect. Well, you'd have to inspect the definition of the function and understand it (and the functions that it references) well enough to determine whether side effects exist. And this is the problem with side effects. You may not be able to inspect the function definition, or the side effect may be subtle. This is why side effects are so pernicious. Unexpected side effects are the cause of many, many software defects.
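For user-defined functions you can at least look at the stored rules (built-in functions generally don't expose theirs). For example, using the MySquare3 definition discussed below:

Definition[MySquare3]
(* prints the stored definition: MySquare3[x_] := (countUsages += 1; x^2) *)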
What is the difference between MySquare3 and Function[countUsages += 1; #^2]?
Effectively, there's no difference: the exact same computations occur. But the process for getting there is slightly different. To evaluate MySquare3[1], the evaluator needs to search the list of DownValues (among other things) for a rule matching that expression. If it finds one, it does the replacement. In this case it finds
HoldPattern[MySquare3[x_]] :> (countUsages += 1; x^2)
and so does the replacement to give
countUsages += 1; 1^2
which it then proceeds to evaluate further.
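For example, assuming MySquare3 was defined with SetDelayed, that rule is exactly what's stored in its DownValues:

MySquare3[x_] := (countUsages += 1; x^2)
DownValues[MySquare3]
(* {HoldPattern[MySquare3[x_]] :> (countUsages += 1; x^2)} *)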
To evaluate Function[countUsages += 1; #^2][1], the evaluator doesn't need to go looking for DownValues (or any other type of replacement). It just immediately slurps the argument into the function body to give
countUsages += 1; 1^2
It's the exact same result, just a slightly different path to get there.
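A quick check of that, assuming the MySquare3 definition above and starting countUsages at 0:

countUsages = 0;
MySquare3[3]
(* 9 *)
Function[countUsages += 1; #^2][3]
(* 9 *)
countUsages
(* 2 *)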
What is MySquare3 /@ Range[3] ... why can you omit the [#]?
This is just how this syntax is defined. The Map function, aka /@, applies its first argument to every element of the second argument at the first level. So, given a function f and an argument x, the word "apply" means "construct the expression f[x]".
You asked about MySquare3[#] /@ Range[3]. That's a syntactically valid expression; it just doesn't do what you want.
MySquare3[#] /@ Range[3]
(* {(#1^2)[1], (#1^2)[2], (#1^2)[3]} *)
To make this work with slots, you need to use Function:
MySquare3[#] & /@ Range[3]
(* {1, 4, 9} *)
This works, but the Function is superfluous. To evaluate MySquare3[#]&[1], we pull the argument into the body of the function to get MySquare3[1], and now we're right where we would have been had we just started with MySquare3 /@ Range[3].
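In other words (assuming the MySquare3 definition from the previous question), the slot-and-ampersand step just collapses away, and the plain form gives the same result:

MySquare3[#] &[1]
(* 1 *)
MySquare3 /@ Range[3]
(* {1, 4, 9} *)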