Message Boards

29 | 202978 Views | 379 Replies | 674 Total Likes

New Functions I would like to see in future Wolfram Language versions

Posted 11 years ago
I was wondering: it would be interesting to use the community as a collaborative way to request new functions that could be incorporated into future versions of the Wolfram Language. Sometimes users simply don't have the whole, deeper system view needed to see that a requested function is too specific, too broad, or already implemented; but I believe that at other times we can have nice insights that Wolfram Research people haven't had yet, or that have not received much attention. So the idea is:

Post your requested function as an answer to this question, and let the upvotes show which ones are the most interesting!

Some rules:
1- One post per function (or class of functions); you can have more than one request.
2- Give an example of how your function would be used.
POSTED BY: Rodrigo Murta
379 Replies

A new special function: Srivastava-Daoust function for N variables.

The Srivastava-Daoust function can be used to express:

1. All four Appell functions

2. The Horn functions: the 34 distinct convergent hypergeometric series of order two enumerated by Horn

3. The Kampé de Fériet function

4. The Lauricella functions

5. MacRobert's E-function

6. Humbert's series

7. Hypergeometric functions of one, two, three, four, and n variables

8. The Srivastava hypergeometric functions in three variables

9. And many, many more...

Srivastava-Daoust function definition for 2 variables.

Srivastava-Daoust function definition for N variables.

POSTED BY: Mariusz Iwaniuk

Mathematica offers a great number of neural nets for images, but can we have some neural nets for satellite images (remote sensing)? Thanks.

POSTED BY: André Dauphiné

GeoImage[image,proj,bbox] and GeoArray[image,proj,bbox] objects. These would be rectangular datasets (arrays or images) bundled with a projection and bounding box info (in that projection). GeoImage and GeoArray objects would then be used as GeoGraphics primitives, being placed and reprojected automatically.

POSTED BY: Gareth Russell

GeoTIFFs would import by default as GeoImage objects. And Wolfram could use the format in its curated data, as well as hosting a repository of user-contributed GeoImage objects.

POSTED BY: Gareth Russell

The raster reprojection code must already exist, because it is done with the satellite imagery and other GeoGraphics background options.

POSTED BY: Gareth Russell

I'd like to see a RegionPlot3D in cylindrical and spherical coordinates.

Unfortunately, it appears to work only for Cartesian coordinates.

POSTED BY: Mariusz Iwaniuk

I'd like to see q-calculus functions, such as:

1. q-Derivatives

2. q-Integrals

3. q-Exponential Functions

4. q-Sine Functions

5. q-Cosine Functions

6. q-Beta Functions

7. q-Bernoulli Polynomials

8. q-Euler Numbers

9. q-Stirling Numbers

10. q-Orthogonal Polynomials

11. q-Appell Functions

and many more from: q-Calculus, Wikipedia, and MathWorld.
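As a flavor of what a first step could look like, here is a sketch of the Jackson q-derivative, D_q f(x) = (f(qx) − f(x))/((q − 1)x); the name qD is my own, not a proposed official spelling:

```mathematica
(* Jackson q-derivative of an expression in x *)
qD[expr_, x_, q_] := Simplify[((expr /. x -> q x) - expr)/((q - 1) x)]

(* The q-analog of the power rule: the result is equivalent to
   x^(n-1) (q^n - 1)/(q - 1), which tends to n x^(n-1) as q -> 1 *)
qD[x^n, x, q]
```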

POSTED BY: Mariusz Iwaniuk

It's been 6 years since the last time I wrote a post about the Hilbert transform, and we still don't have it.

The Hilbert transform, sometimes called a quadrature filter, is useful in radar systems, single-sideband modulators, speech processing, measurement systems, and schemes for sampling band-pass signals.

It's time to change that.
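Until it exists as a built-in, here is a minimal sketch of the discrete Hilbert transform via the FFT (the standard analytic-signal construction; the name hilbertTransform is my own):

```mathematica
(* Discrete Hilbert transform: zero out negative frequencies,
   double the positive ones, invert, and take the imaginary part *)
hilbertTransform[x_?VectorQ] := Module[{n = Length[x], h, X},
  h = ConstantArray[0., n];
  h[[1]] = 1.;
  If[EvenQ[n],
    h[[n/2 + 1]] = 1.; h[[2 ;; n/2]] = 2.,
    h[[2 ;; (n + 1)/2]] = 2.];
  X = Fourier[x, FourierParameters -> {1, -1}];
  Im[InverseFourier[h X, FourierParameters -> {1, -1}]]]

hilbertTransform[N@Table[Cos[2 Pi 5 k/128], {k, 0, 127}]]
```

For the sampled cosine above, the result approximates the corresponding sampled sine, matching the textbook transform pair.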

Regards M.I.

POSTED BY: Mariusz Iwaniuk

I feel like this would make an excellent contribution to the WFR!

As an added bonus, WFR functions are sometimes nominated for inclusion in future versions of Mathematica! Every so often, the resource system team gets together and discusses recent WFR submissions, taking special note of implementations that would make good system-level functions.

The Hilbert transformations are of widespread interest because they are applied in the theoretical description of many devices and systems and directly implemented in the form of Hilbert analog or digital filters (transformers). Let us quote some important applications of Hilbert transformations:

1. The complex notation of harmonic signals in the form of Euler's equation exp(jωt) = cos(ωt) + j sin(ωt) has been used in electrical engineering since the 1890s and nowadays is commonly applied in the theoretical description of various, not only electrical, systems. This complex notation had been introduced before Hilbert derived his transformations. However, sin(ωt) is the Hilbert transform of cos(ωt), and the complex signal exp(jωt) is a precursor of a wide class of complex signals called analytic signals.

2. The concept of the analytic signal of the form c(t) = u(t) + jv(t), where v(t) is the Hilbert transform of u(t), extends the complex notation to a wide class of signals for which the Fourier transform exists. The notion of the analytic signal is widely used in the theory of signals, circuits, and systems. A device called the Hilbert transformer (or filter), which produces at the output the Hilbert transform of the input signal, finds many applications, especially in modern digital signal processing.

3. The real and imaginary parts of the transmittance of a linear and causal two-port system form a pair of Hilbert transforms. This property finds many applications.

4. Recently, two-dimensional (2-D) and multidimensional Hilbert transformations have been applied to define 2-D and multidimensional complex signals, opening the door for applications in multidimensional signal processing.


POSTED BY: Mariusz Iwaniuk

I'd like to see some new special functions:

1. The AppellF2, AppellF3, AppellF4 functions

2. The Kampé de Fériet function

3. The Horn functions

4. The Lauricella functions

5. MacRobert's E-function

6. The multiple zeta function

7. GeneralizedPolylog and MultiPolylog, representing the function class consisting of generalized polylogarithms, multiple polylogarithms, harmonic polylogarithms, hyperlogarithms, and related functions

8. The Fox H-function of several complex variables

References

[1] A.B.Goncharov. "Multiple polylogarithms, cyclotomy and modular complexes", Math Res.Letters. Vol. 5 (1998): 497-516.

[2] Jens Vollinga, Stefan Weinzierl. "Numerical evaluation of multiple polylogarithms", Comput.Phys.Commun. Vol. 167 (2005): 23 pp.

[3] H. Frellesvig, D. Tommasini, C. Wever. "On the reduction of generalized polylogarithms to Li_n and Li_{2,2} and on the evaluation thereof", JHEP 1603 (2016): 35 pp.

[4] Generalized Hypergeometric Functions with Applications in Statistics and Physical Sciences

POSTED BY: Mariusz Iwaniuk

I'd like to see new special functions:

1. Humbert series

and all the others from this list that don't yet exist in Mathematica.

Update 2024.01.08

Humbert series can be expressed in terms of the Lauricella functions, so we only need the Lauricella functions.

Regards.

POSTED BY: Mariusz Iwaniuk

Multivariable hypergeometric functions (such as the famous Appell, Lauricella and Kampé de Fériet functions, etc.) and their various generalizations appear in many branches of mathematics and its applications. Many authors have contributed works on this subject. In recent years, several authors have considered some interesting extensions of the Appell and Lauricella functions. Motivated by their works, we introduce a class of new extensions of the Lauricella functions and find their connection with other celebrated special functions.

G. Lauricella in 1893 defined four multidimensional hypergeometric functions $F_A$, $F_B$, $F_C$ and $F_D$. These functions depended on three variables but were later generalized to many variables. Lauricella's functions are infinite sums of products of variables and corresponding parameters, each of which has its own parameters. In the present work on Lauricella's function $F_A^{(n)}$, limit formulas are established, some expansion formulas are obtained that are used to write recurrence relations, and new integral representations and a number of differentiation formulas are obtained that are used to derive finite and infinite sums.

The great success of the theory of hypergeometric functions of a single variable has stimulated the development of a corresponding theory in two or more variables. Multiple hypergeometric functions arise in many areas of modern mathematics, and they enable one to solve constructively many topical problems important for theory and applications.

A generalization of the Fox H-function was given by Ram Kishore Saxena. A further generalization of this function, useful in physics and statistics, was given by A. M. Mathai and Ram Kishore Saxena.

See the book The H-Function: Theory and Applications, in particular its Appendix.

POSTED BY: Mariusz Iwaniuk

Quite often I would like to be able to simply write:

func @@@@ matrix
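For reference, the effect presumably intended by @@@@ (Apply one level deeper than @@@, i.e. at level {2}) can already be written with an explicit level specification, shown here on a rank-3 array:

```mathematica
(* Apply f at level {2}: each innermost List head is replaced by f *)
arr = {{{1, 2}, {3, 4}}, {{5, 6}}};
Apply[f, arr, {2}]
(* {{f[1, 2], f[3, 4]}, {f[5, 6]}} *)
```

The requested operator form would simply be syntactic sugar for this.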
POSTED BY: Henrik Schachner

Clock in Mathematica is buggy (I submitted a bug report) and is very limited in what actions it can perform. When set as a recurring countdown clock, it sometimes performs my action (e.g. NotebookSave[] and a ding to alert me) and at other times does nothing, even though I am not overloading the kernel or front end with heavy processing. I really dislike that it starts immediately when created, instead of letting me define an event or function to start, pause, stop, resume, and evaluate any expression or combination thereof. There has to be a more efficient way of updating values with a clock than checking the update interval every second in Dynamic; for example, an internal notification system that checks only once per cycle. It should have settable custom properties that can change according to the time passed. I realize I have just about described task objects, but that is not what I mean at all: I want a local object that is easily configurable to show the time remaining until the next evaluation (when in Dynamic). I almost got a countdown clock to work in a button, except that's when I discovered a few bugs: the countdown doesn't even wait for the Button to be clicked before counting down, and it continues to count down in Dynamic even when scrolled out of view.

In short I want a clock i can actually work with not one that only halfway works like what we have now.

POSTED BY: Jules Manson

Not sure if this can be done with something like LocalSymbol or LocalObject, but I would like to see persistent functions, so that they could easily be overloaded at the start of a kernel session, for example to load packages in a very easy and direct fashion. Such a function might look like this...

loadPackages[args]=PersistentFunction[name,args, type->loc]:=Function[{symbols}, do something with args]

where the Head (Function) may also be any other applicable head, like Module, Block, With, etc. Even better would be the ability to create a function with multiple definitions; perhaps just set PersistentFunction to the symbol name instead of defining it.

For example, if we defined several different cases of loadPackages, we could store only the function name as a persistent symbol, which could then be used immediately:

loadPackages[args/;cond1]:=Function[{symbols}, do something with args];
loadPackages[args/;cond2]:=Function[{symbols}, do something else with args];
loadPackages[args]:=some error message;

PersistentFunction[name,args,type->loc]=loadPackages
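A rough emulation of this idea is possible today by storing held definitions in a LocalSymbol and releasing them at session start; the key name "loadPackagesDef" and the package name "MyPackage`" below are arbitrary placeholders:

```mathematica
(* Persist a definition between sessions as held code *)
LocalSymbol["loadPackagesDef"] =
  Hold[loadPackages[pkgs_List] := Scan[Needs, pkgs]];

(* In a later session: restore the definition, then use it *)
ReleaseHold[LocalSymbol["loadPackagesDef"]];
loadPackages[{"MyPackage`"}]  (* hypothetical package name *)
```

This lacks the ergonomics the proposed PersistentFunction would provide, but it shows the mechanism is within reach.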
POSTED BY: Jules Manson

The new Video function is very cool! But I miss auxiliary functions like those for Audio.

In Audio we have: AudioJoin, AudioSplit, AudioPad and so on.

Working with video automation, it would be nice to have VideoJoin, VideoSplit, VideoPad and so on.

POSTED BY: Rodrigo Murta

I have already requested VideoJoin and ConformVideos actually. They know about it internally…

POSTED BY: Sander Huisman

VideoJoin was introduced in 12.2 and has been updated twice.

VideoSplit also arrived in 12.2 and was updated in 12.3.

POSTED BY: Sander Huisman

There is:

ImageMeasurements[image, "Transparency"]
POSTED BY: Piotr Wendykier

AlphaChannelQ

Doing exactly what you think it should do…

POSTED BY: Sander Huisman

Isn't that strange that this is missing? Here's how to do it: https://mathematica.stackexchange.com/a/157458/12 BTW the LibraryLink interface does have MImage_alphaChannelQ.

POSTED BY: Szabolcs Horvát

Agreed, it is strange that it is missing. I know of the workarounds; I was using the

RemoveAlphaChannel[img] === img

trick… But it is poorly documented for sure. The RemoveAlphaChannel and SetAlphaChannel pages should definitely mention it… I think it is worthy of its own function…

POSTED BY: Sander Huisman

Sander, RemoveAlphaChannel[img]===img is terribly inefficient and couldn't be considered more than a workaround. What I meant was

ImageMeasurements[img, "Transparency"]

which is perfectly good, except that it is very hard to find. One does not think of the presence of an alpha channel as something that needs to be measured. Measurement implies computation, while this only needs to check a flag. When I originally needed this, I spent a lot of time searching and never found it.

@Piotr Wendykier, maybe there is an opportunity to improve the documentation here. As an interim measure, the hidden keyword alphachannelq could be added to the ImageMeasurements doc page. Also, I notice that almost all the "Global image properties" that ImageMeasurements can return that aren't really measured, just extracted, have their own function. We have ImageType, ImageChannels, ImageDimensions, ImageColorSpace, etc.

Even if I come across the ImageMeasurements doc page, I would assume (just by the name) that this is a function that computes properties like the mean pixel value. That is exactly what it does, and that is exactly what the immediately visible basic examples show. If I were to look for functionality to test for the presence of an alpha channel, even if I found the ImageMeasurements doc page, it would never occur to me to open the Details section and read through it carefully. A quick glance at the page would convince me that no, this can't possibly be the function I need.

Summary: Yes, the functionality exists, but it is near-impossible to find for many users. An AlphaChannelQ function would be a tangible improvement.
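In the meantime, the proposed function is a one-liner on top of the hidden ImageMeasurements property discussed above (lowercase name to mark it as user-defined, not built-in):

```mathematica
(* True iff the image carries an alpha channel; just reads a flag *)
alphaChannelQ[img_Image] := ImageMeasurements[img, "Transparency"]

alphaChannelQ[SetAlphaChannel[RandomImage[], 0.5]]  (* True *)
alphaChannelQ[RandomImage[]]                        (* False *)
```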

Additional note: Please add a link from the MImage_alphaChannelQ doc page to the ImageMeasurements doc page.

POSTED BY: Szabolcs Horvát

Built-in phase unwrapping, like this.

POSTED BY: Sander Huisman

ListStepPlot3D should exist as an extension of ListStepPlot.

POSTED BY: Neel Basu

An upgrade to the NDSolve function to numerically solve fractional differential equations:

  • ordinary fractional differential equations
  • ordinary fractional differential-algebraic equations
  • ordinary fractional delay differential equations (with a variable delay)
  • stochastic fractional differential equations
  • fractional partial differential equations
  • fractional stochastic partial differential equations
  • fractional partial random differential equations with state-dependent delay

In my opinion these functions should have been implemented 20 years ago. Maple has the fracdiff function and MATLAB has fde12.

POSTED BY: Mariusz Iwaniuk

Examples for:

Functional (delay) differential equations involving the Caputo fractional derivative or the Riemann-Liouville fractional derivative, with initial conditions or boundary conditions.

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] == 0, 
y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y''[x - 4] == 0, 
y[x /; x <= 0] == 0, y'[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
 NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[2 x - 1] == 0, 
 y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
 NDSolve[{CaputoD[y[x], {x, 5/2}] - 3  y[2 x - 1] == 0, 
y[x /; x <= -1] == 1, y[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[2 x], {x, 1/2}] - 3  y[x/4] == 0, 
  y[x /; x <= 2] == Sin[x]}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[2 x + 3], {x, 5/2}] - 3  y'[Sin[x]] + 
    y''[5 x - 1] == 0, y[x /; x <= 2] == Cos[x], y'[x /; x <= 0] == 0,
  y''[x /; x <= 1] == Exp[x]}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 1/2}] - 3  y[x - 1] == 0, 
y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 1/2}] - 3  y''[x - 4] == 0, 
y[x /; x <= 0] == 0, y'[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 1/2}] - 3  y[2 x - 1] == 0, 
y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 5/2}] - 3  y[2 x - 1] == 0, 
y[x /; x <= -1] == 1, y[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[2 x], {x, 1/2}] - 3  y[x/4] == 0, 
  y[x /; x <= 2] == Sin[x]}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[2 x + 3], {x, 5/2}] - 3  y'[Sin[x]] + 
 y''[5 x - 1] == 0, y[x /; x <= 2] == Cos[x], y'[x /; x <= 0] == 0,
y''[x /; x <= 1] == Exp[x]}, y[x], {x, 0, 1}];

Reference: Link1, Link2, Link3, Link4 Link5 Link6

POSTED BY: Mariusz Iwaniuk

An upgrade to the NDSolve function to solve integro-differential equations.

Examples:

eq = Inactivate[y'[x] + 2*Sin[y[x]] + 5*Integrate[y[t], {t, 0, x}] == 
Piecewise[{{0, x < 0}, {1, x >= 0}}], Integrate];
NDSolve[{Activate[eq], y[0] == 0}, y, {x, 0, 1}]

eq1 = Inactivate[y[x] - 1/2*Integrate[Exp[y[t]]*x*t, {t, 0, 1}] == 5/6*x, Integrate];
NDSolve[Activate[eq1], y, {x, 0, 1}]

eq3 = Inactivate[{y1[x] == 
     x^2 - 1/5*x^5 - 1/10*x^10 + 
      Integrate[y1[t]^2 + y2[t]^3, {t, 0, x}], 
    y2[x] == x^3 + Integrate[y1[t]^3 - y2[t]^2, {t, 0, x}]}, Integrate];
NDSolve[Activate[eq3], {y1[x], y2[x]}, x]

eq4 = Inactivate[{y1'[x] == 
      1 + x + x^2 - y2[x] - Integrate[y1[t] + y2[t], {t, 0, x}], 
     y2'[x] == -1 - x + y1[x] - Integrate[y1[t] - y2[t], {t, 0, x}]}, 
    Integrate];
NDSolve[{Activate[eq4], y1[0] == 1, y2[0] == -1}, {y1, y2}, {x, 0, 1}]

For solving:

Forms of linear integral equations:

  • Fredholm second kind
  • Fredholm first kind
  • Fredholm third kind
  • Wiener - Hopf
  • Volterra second kind
  • Volterra first kind
  • Renewal equation
  • Abel equation
  • Cauchy singular

Forms of nonlinear integral equations:

  • Fredholm second kind
  • Urysohn second kind
  • Hammerstein
  • Urysohn first kind
  • Urysohn - Volterra
  • Hammerstein - Volterra second kind
  • Hammerstein - Volterra first kind
  • Chandrasekhar H equation
  • Cauchy singular

Fractional Calculus:

  • fractional integro-differential equations
  • fractional integro-differential equations with state-dependent delay(with a variable delay)
  • stochastic fractional integro-differential equation
POSTED BY: Mariusz Iwaniuk

Examples for:

Functional (delay) integro-differential equations involving the Caputo fractional derivative, with initial conditions or boundary conditions.

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] + 
     Integrate[y[2 t], {t, 0, x}] == 0, y[x /; x <= 0] == 0}, 
  y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] + 
     Integrate[y[t + 1], {t, -1, 1}] == 0, y[x /; x <= 0] == 0}, 
  y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] + 
     Integrate[y[2 t - 1], {t, 0, 1}] == 0, y[x /; x <= 0] == 0}, 
  y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y''[x - 4] - 
     Integrate[y[2 t], {t, 0, x}] == 0, y[x /; x <= 0] == 0, 
   y'[x /; x <= 1] == 1}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[2 x - 1] + 
     Integrate[Cos[2 x + t] y[t], {t, 0, Pi}] == 0, 
   y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 5/2}] - 3  y[2 x - 1] + 
     Integrate[Exp[-3 x] y[t], {t, 0, Infinity}] == 0, 
   y[x /; x <= -1] == 1, y[x /; x <= 1] == 1}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[2 x], {x, 1/2}] - 3  y[x/4] - 
    Integrate[Exp[-3 I  x] y[t], {t, -Infinity, Infinity}] == 0, 
  y[x /; x <= 2] == Sin[x]}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[2 x + 3], {x, 5/2}] - 3  y'[Sin[x]] + 
    y''[5 x - 1] + 
    Integrate[y[Exp[t] - Abs[t]]/Sqrt[1 + t], {t, 0, x}] == 0, 
  y[x /; x <= 2] == Cos[x], y'[x /; x <= 0] == 0, 
  y''[x /; x <= 1] == Exp[x]}, y[x], {x, 0, 1}];

Reference:

Link1 Link2 Link3

POSTED BY: Mariusz Iwaniuk

Is it really so important to bulk up the totality of built-in functions with yet another new one, RowReduceAugmented? It's just one or two steps to get that from existing functions, e.g.:

mat = RandomInteger[{-5, 5}, {4, 4}];
aug = Join[mat, IdentityMatrix[First@Dimensions@mat], 2];
RowReduce[aug]

And it's simple enough to define a single function to do all that:

RowReduceAugmented[mat_] := 
  RowReduce[Join[mat, IdentityMatrix[First@Dimensions@mat], 2]]

To some extent, "less is more": each new function added to the supply of built-in ones decreases a bit the ease of finding just the one you want. Live with a smaller number of them but become adept at combining them.

Just my opinion!

POSTED BY: Murray Eisenberg
Posted 7 years ago

Your point is well taken. And perhaps the best solution is to put your helpful snippets into the Mathematica help.

Having spent decades teaching mathematics and computation, you are well placed to address this question. Does a simplified tool set help students to assimilate Mathematica more quickly? My thought is that an expanded, explicit toolkit would be welcomed by new users, with only a trivial burden on acclimated users.

POSTED BY: Daniel Topa
Posted 8 years ago

An extremely valuable tool would be a RowReduce with the augmented identity matrix.

An example follows. Start with a matrix

(image: example input matrix)

The proposed command would augment the input matrix and reduce the system like so

(image: output of the proposed command)

The process of interest is depicted as

(image: diagram of the row-reduction process)

Potential features: The null space vectors are red, range space blue. The partitioning separates $E_A$ from $R$.

At this point, we are within reach of the Holy Grail: resolving the four fundamental subspaces of the matrix. The command FTOLA[A] would produce the needed spans.

(image: output of the proposed FTOLA command)

POSTED BY: Daniel Topa

I would like to see a function which is like KeyMap, but will handle key collisions by combining the corresponding values using a combiner function.

A possible implementation:

KeyCombine[fun_, asc_?AssociationQ] := GroupBy[Normal[asc], fun@*First -> Last]
KeyCombine[fun_, asc_?AssociationQ, comb_] := GroupBy[Normal[asc], fun@*First -> Last, comb]

Example usage:

KeyCombine[
 Sort,
 <|{1, 2} -> 10, {2, 1} -> 20, {1, 3} -> 30|>
 ]

(* <|{1, 2} -> {10, 20}, {1, 3} -> {30}|> *)

KeyCombine[
 Sort,
 <|{1, 2} -> 10, {2, 1} -> 20, {1, 3} -> 30|>,
 Total
 ]

(* <|{1, 2} -> 30, {1, 3} -> 30|> *)

Why introduce this function?

  • I find the concept intuitive. I can think in terms of KeyCombine.
  • I found several uses for it, including combining data where a single experimental subject may have been measured multiple times. It is closely related to the edge property combiner I asked for here
  • Perhaps a more efficient implementation is possible than the above GroupBy, which necessitates converting the association to a rule list first (not sure about this).

Why not?

  • Some might say that this GroupBy implementation is already simple enough. — Counter-argument: the original Merge implementation I came up with (see link below) is also simple but much slower. And we have ReverseSort now.

Link to StackExchange thread.

POSTED BY: Szabolcs Horvát
Posted 8 years ago

It would be pretty cool if TableForm would, um, put Datasets in TableForm.

POSTED BY: Matt Pillsbury
Posted 8 years ago

These may be too finicky to count, but it would be really nice if Select didn't give unpacked results when you pass in a packed list. A rigorous scientific study of the things that have annoyed me in the past month indicates that Select's current unpacking behavior is responsible for 137% of the performance problems with my code.
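A packed-friendly workaround for the Select case is to build a packed 0/1 selector and use Pick, which (as noted elsewhere in this thread) stays packed when both inputs are packed:

```mathematica
(* Select elements > 0.5 without unpacking the data *)
data = RandomReal[1, 10^6];       (* packed array *)
sel = UnitStep[data - 0.5];       (* packed 0/1 selector *)
picked = Pick[data, sel, 1];
Developer`PackedArrayQ[picked]
(* True *)
```

This only works when the predicate can be vectorized, which is exactly the case where the unpacking hurts most.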

Also, the newish script mode is great, but it would be even better if it would direct $Messages to stderr on Mac/Linux, instead of stdout.

POSTED BY: Matt Pillsbury
Posted 8 years ago

More complete support for creating AsynchronousTaskObjects. Currently it seems the only way to create an asynchronous task is URLSubmit and similar functions. However, there are quite a few other ways of doing things asynchronously, like StartProcess, which you can poll for output using the newish ReadString[..., EndOfBuffer], or wait for to finish using ProcessStatus. You can submit a task to a subkernel using ParallelSubmit and then use WaitNext/WaitAll to check whether it's done, or you can run a job in the cloud using CloudSubmit.

I may have forgotten a few.

As far as I can tell, there's no way to integrate all of these in a single place, which means learning and understanding a bunch of different APIs, and it means if you want to, say do a job on a sub kernel while you wait to check an external database where the result might be cached, you've gotta write a bunch of grotty custom code yourself. This is a shame.

POSTED BY: Matt Pillsbury

There is already support for this, but the asynchronous tasks must be programmed in C (LibraryLink), and the API is not explicitly documented. However, LibraryLink comes with several examples that show how to use them, and there are some posts on StackExchange that go into more detail (based on these examples) and show additional examples.

POSTED BY: Szabolcs Horvát

I should also note that AsynchronousTaskObject is quite different from the other things you mention. Its unique capability is that once the task is done, it can trigger the evaluation of a function. This is the second argument of URLFetchAsynchronous.

StartProcess has an entirely different goal: start and manage other processes, including sending/receiving data.

ParallelSubmit is for parallel evaluation, with the goal of increasing performance. This uses subkernels, which are entirely separate processes.

I think the similarity you point out is superficial. These are entirely separate tools, and it doesn't make sense to unify them.

POSTED BY: Szabolcs Horvát
Posted 8 years ago

That capability isn't only useful for fetching URLs, though, which is why I'd like to see the unification. Being able to trigger a function when the external process is done is potentially useful whenever there's some latency, whether that latency comes from fetching a URL, from doing some other task (since StartProcess can do virtually anything), or from performing a lengthy computation (which is what ParallelSubmit does).

I can emulate this functionality in these other cases, to be sure. One way is to use ScheduledTask.

POSTED BY: Matt Pillsbury

You are right, it would be nice to have a callback for when a process started with StartProcess terminates.

I am also hoping that the asynchronous LibraryLink stuff will get better documented, so we don't have to figure things out solely based on the example.

POSTED BY: Szabolcs Horvát

Hi Alexey,

That is indeed how it can be easily implemented for Take and Drop. For Part it seems more difficult to implement UpTo, though... Especially for the multi-level cases, and in combination with Set it gets complicated quite quickly...

a = {{1,2,3},{1,2,3,4,5},{1,2,3,4,5,6},{1,2}}
a[[All, ;; UpTo[3]]] += 1

would result in:

{{2,3,4},{2,3,4,4,5},{2,3,4,4,5,6},{2,3}}
POSTED BY: Sander Huisman

Being able to use UpTo inside Part:

{Range[6], Range[2], Range[8]}[[All, ;; UpTo[4]]]

would return:

{{1, 2, 3, 4}, {1, 2}, {1, 2, 3, 4}}

Also, being able to use UpTo with negative numbers (or making a new function called DownTo):

Take[Range[10], UpTo[-5]]
Take[Range[4], UpTo[-5]]

or alternatively:

Take[Range[10], DownTo[-5]]
Take[Range[4], DownTo[-5]]

would return:

{6,7,8,9,10}
{1,2,3,4}

I think this would be a very natural extension of UpTo.
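Until Part supports it, the read-only ragged case can be emulated by mapping Take with UpTo over the rows:

```mathematica
(* Per-row Take with UpTo reproduces the requested Part behavior *)
rows = {Range[6], Range[2], Range[8]};
Take[#, UpTo[4]] & /@ rows
(* {{1, 2, 3, 4}, {1, 2}, {1, 2, 3, 4}} *)
```

The assignment form (+= on such a span) has no comparably simple emulation, which is exactly why built-in support would help.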

POSTED BY: Sander Huisman
ClearAll[DownTo];
DownTo /: Take[x_, DownTo[y_Integer]] := Take[x, -Min[Length@x, Abs[y]]*Sign[y]]
DownTo /: Drop[x_, DownTo[y_Integer]] := Drop[x, -Min[Length@x, Abs[y]]*Sign[y]]

Take[Range[10], DownTo[5]]    (* {6, 7, 8, 9, 10} *)
Take[Range[10], DownTo[-5]]   (* {1, 2, 3, 4, 5} *)
Take[Range[4], DownTo[5]]     (* {1, 2, 3, 4} *)
Take[Range[4], DownTo[-5]]    (* {1, 2, 3, 4} *)
Drop[Range[10], DownTo[5]]    (* {1, 2, 3, 4, 5} *)
Drop[Range[10], DownTo[-5]]   (* {6, 7, 8, 9, 10} *)
Drop[Range[4], DownTo[5]]     (* {} *)
Drop[Range[4], DownTo[-5]]    (* {} *)
POSTED BY: Alexey Golyshev

Functions like Table and Do should be able to handle Associations with key and value, like so:

Table[
    (*code*)
,    
    {k -> v, <|1->"a",2->"b"|>}
]

where k contains the key and v contains the value. Currently one can only get the values, not the keys, so one has to iterate using indices and then look up the keys/values again...

The form in which it is implemented does not really matter, though this one is somewhat inspired by PHP. A form like {k, v} rather than k -> v is incompatible with the current implementation of iterators in e.g. Manipulate, where v is the default value. Fortunately, Table and Do do not have such a 'default setting', though it might confuse people... Arrow notation is not very 'Wolframian', but it is only syntactic sugar in the end...

I'm aware of KeyValueMap, but it is sometimes not very handy when a Do is needed (e.g. when not storing intermediate data). Also, multiline code is somewhat easier to write in Table than in KeyValueMap, as the latter requires a (pure) function... Especially if one wants to do something like:

Table[
    {k, #} & /@ v
 ,
    {k -> v, <|"a" -> {1, 2}, "b" -> {3, 4, 5}|>}
 ]
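For completeness, the closest one can get today is iterating over Normal of the association and destructuring the rules by hand:

```mathematica
(* Emulate the proposed {k -> v, assoc} iterator via a list of rules *)
assoc = <|"a" -> {1, 2}, "b" -> {3, 4, 5}|>;
Table[
  With[{k = First[kv], v = Last[kv]},
    {k, #} & /@ v],
  {kv, Normal[assoc]}]
(* {{{"a", 1}, {"a", 2}}, {{"b", 3}, {"b", 4}, {"b", 5}}} *)
```

The proposed syntax would just remove the Normal/With boilerplate.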
POSTED BY: Sander Huisman

I'd like to see a "MetaInformation" import element for MP3 files to read the ID3 tags. This element exists for other audio formats already and would be quite useful for MP3 as well.

POSTED BY: Bianca Eifert

A version of Pick that does not unpack packed arrays (or at least repacks the result after picking the rows). Often the data is already packed…

POSTED BY: Sander Huisman

If the inputs are both packed, Pick is very fast (since version 8) and won't unpack:

Developer`PackedArrayQ@Pick[RandomReal[1, 100], RandomInteger[1, 100], 1]

(* True *)
POSTED BY: Szabolcs Horvát

A nice and efficient implementation of the closest pair of points problem. Similar to Min[DistanceMatrix[...]] and NeighborhoodGraph, but more efficient, I think...
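For what it's worth, a reasonably efficient version can already be sketched on top of Nearest, which uses a spatial data structure internally: ask each point for its two nearest neighbors (itself and one other) and keep the closest such pair. The name closestPair is my own:

```mathematica
closestPair[pts_] := Module[{nbrs, dists, i},
  (* for each point, indices of the 2 nearest points; the first is the point itself *)
  nbrs = Nearest[pts -> "Index", pts, 2];
  dists = MapThread[EuclideanDistance, {pts, pts[[nbrs[[All, 2]]]]}];
  i = First@Ordering[dists, 1];
  pts[[{i, nbrs[[i, 2]]}]]]

closestPair[RandomReal[1, {1000, 2}]]
```

This avoids the quadratic memory of a full DistanceMatrix, though a dedicated built-in could still do better.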

POSTED BY: Sander Huisman

I would like to see a new programming font with ligatures, comparable to Fira Code, but designed for Mathematica.

Fira Code improves the readability of code by employing ligatures. E.g., != displays as a slashed equal sign because it represents "not equal" in many languages. The underlying text is not changed at all; it's simply displayed in a different way. Fira Code works with many programmers' editors, including IntelliJ IDEA, for which we have an excellent Mathematica plugin. It is designed to play well with many languages, but it doesn't work so well with Mathematica. There are several other similar fonts, some designed specifically for one language (e.g. Hasklig).

I've been using Fira Code with C++ and I think it brings a genuine improvement in readability. But this is no surprise to Mathematica users, as we already have a similar feature in the Front End. What I'd like is to be able to have this in any editor, as I only use the Front End for interactive work, not for writing packages.

Here's an illustration of how certain character combinations display with Fira Code. Right: with ligatures. Left: without. What's not really visible here is that Fira Code doesn't just change the glyph shapes; it also effectively changes the spacing, which has a big effect on readability. (Again, we know this from how the Front End works.) The ... and :: sequences don't change in shape, but they are shorter than three separate dots or two colons, so they have more space on the left and right. All this doesn't break the monospace nature of the font.

(image: Fira Code ligature comparison, with and without ligatures)

POSTED BY: Szabolcs Horvát

Extend TimeSeriesMap to take the time as a second argument to the specified function (or as the first... when both arguments are present).

Application: for instance, if I want to correct the value, depending on the date.

This would allow many more operations to stay within the TimeSeries framework (instead of having to get the values out ("Path"), process them, and put them back into a TimeSeries).
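Currently this requires dropping down to the path by hand. A sketch of the requested time-aware map, where f[t, v] is a hypothetical time-dependent correction:

```mathematica
ts = TimeSeries[{{1, 10.}, {2, 20.}, {3, 30.}}];
f[t_, v_] := v + t;  (* example: correction depends on the time stamp *)

(* apply f to every {time, value} pair and rebuild the TimeSeries *)
TimeSeries[{#[[1]], f[#[[1]], #[[2]]]} & /@ Normal[ts["Path"]]]
```

The proposed extension would reduce this round trip to a single TimeSeriesMap call.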

POSTED BY: Pedro Fonseca

Leap seconds awareness.

Considering the increased focus on data science, can't we have a system that takes this mess into consideration?

If there's a worry that most users could get confused, it could be added as an option to the time and date functions.

DateDifference[a, b, "LeapSecond"->True]

or

DateObject[{2015,6,30,23,59,59}]+Quantity[1, "Seconds"]
(*DateObject[{2015,7,1,0,0,0}]*)

vs

DateObject[{2015,6,30,23,59,59},"LeapSecond"->True]+Quantity[1, "Seconds"]
(*DateObject[{2015,6,30,23,59,60},"LeapSecond"->True]*)

Another example:

AbsoluteTime@DateObject[{2015,7,1,0,0,0}]
(*3644697600*)
AbsoluteTime@DateObject[{2015,7,1,0,0,0},"LeapSecond"->True]
(*3644697626*)
AbsoluteTime@DateObject[DateObject[{2015,7,1,0,0,0}],"LeapSecond"->True]
(*3644697626*)

Obviously there might be some dark corners (I only gave this an hour of thought, the time it took to write this post). What should the answer to this be?

DateObject[3644697600]==DateObject[3644697626,"LeapSecond"->True]
(*there might be no answer, since we are comparing apples with oranges*)

Or how would DateListPlot work? Probably it would just depend on how the dates were specified. If the list of dates has some DateObjects that consider LeapSecond and others that don't, could we end up with reversed time? Is it an option of the plot, where we specify whether we want everything converted to LeapSecond True or LeapSecond False (again, what would be the logic of comparing apples and oranges in the same plot?)? By the way, I think this plot should have a TimeZone option, so that I don't have to write Block[{$TimeZone=whatever}, DateListPlot[...]]

Something more complex:

WindSpeedData["KSAC", {DateObject[{2008, 1, 1}], DateObject[{2015, 1, 2}]},"LeapSecond"->True]
(*most likely, it would return exactly the same thing as with LeapSecond->False, since there are probably no wind records at exactly the extra second... but the time series stamps would be kept with LeapSecond->True, since that would allow further analysis to take this specification into consideration*)

and its impact would be noticed on things like:

RegularlySampledQ[ ts ]
(*would return True or False, depending on the TimeSeries specification*)

Obviously, there would be a:

$LeapSecond=True

By the way: LeapSecond is most likely not a good choice for the option name, because in the future we might get a MinuteLeap, etc. So TimeLeap is probably better (if WL is still around in about a century).

POSTED BY: Pedro Fonseca

I'm curious: for what applications is the inclusion of leap seconds so critical? I can't think of many, to be honest... I heard that Google fixed it by stretching the second over the entire day, such that every second is just a tiny bit longer on that day...

POSTED BY: Sander Huisman

What sense can we make of "AstronomicalData" (I mean, all data that used to be gathered as the AstronomicalData), and all the related physics field?

If Google considers that the seconds on that day got longer, they probably messed it up even more, since conversions are then needed for the entire day, and not just at discrete events.

Daylight saving time is obviously more noticeable, since it hits more people. When processing records, the first thing I try to understand is whether there is one hour of repeated dates or one hour of missing dates, or, even worse, whether the records were simply overlapped, which is generally difficult to detect...

POSTED BY: Pedro Fonseca

They just write all their logs with their own smoothly stretched time, so there is no 'jump', which can cause all kinds of weird behavior. After writing to the logs, they (presumably) just treat them as normal seconds. To me, it is a genius solution. If (e.g.) email arrivals are a few microseconds late, or even a second late, it doesn't really matter...

Another problem is that future leap seconds are not known yet; they are irregular because the rotation of the Earth is irregular. So how would one also reliably handle dates in the future? Or dates (far) before the leap second?

Regarding AstronomicalData: are ~20 seconds in 40 years really critical? That is 1 part in ~63 million; most measured quantities have much, much larger errors... I would be interested to know if it is critical (it might be, I honestly don't know).

POSTED BY: Sander Huisman

If the convention had been this, why not. But it wasn't. And so they just bypassed the problem in their very specific field.

I'm no specialist, but 20 seconds are probably already enough for a meteorite to hit our planet, or not...

But coming down to earth, I can think of many other cases where this might come in handy: every data source with fast recording (seismic activity, critical equipment monitoring, bank transactions, etc.). Do we have duplicate dates? Do we overlap dates and lose records?

POSTED BY: Pedro Fonseca

You have a good point there! Maybe (like UnixTime) have a function UTCTime that does all the not-so-nice arithmetic, but yes, it would need updates every half year to check for possible new leap seconds...

It's funny that half of the Wikipedia page is about abolishing it and about problems with it...

POSTED BY: Sander Huisman

We can abolish it, but we can't go back in time..

POSTED BY: Pedro Fonseca

By the way, Google's method is actually much more interesting than I thought.

Assuming that my mass remains constant, should I consider that my weight changes for a day, or instead, a one day change on earth's time-space warping?

POSTED BY: Pedro Fonseca

An upgraded Limit function for multivariable (multidimensional) limits.

Example1:

f[x_, y_] := (x*y)/(x^2 + y^2)
Limit[f[x,y],{x->0,y->0}]
(* does not exist *)

Limit[f[x,y],{{x->0,Direction->1},{y->0,Direction->-1}}]
(* -1/2 *)

Limit[f[x,y],{{x->0,Direction->1},{y->0,Direction->1}}]
(* 1/2 *)

Let's check that it does not exist, using different paths:

Limit[f[x, y] /. y -> m*x, x -> 0, Assumptions -> m > 0]
Limit[f[x, y] /. y -> m*x, x -> 0, Assumptions -> m < 0]
Limit[f[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c > 0]
Limit[f[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c < 0]
(*  1/2  ,-1/2,  0  ,0  *)
(* does not exist *)

Example2:

 f2[x_, y_] := (y^2*Sin[x])/(x^2 + y^2)
 Limit[f2[x,y],{x->0,y->0}]
 (* 0 *)

And check:

Limit[f2[x, y] /. y -> m*x, x -> 0, Assumptions -> m == 1]
Limit[f2[x, y] /. y -> m*x, x -> 0, Assumptions -> m == -1]
Limit[f2[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c == 1]
Limit[f2[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c == -1]
(* 0, 0, 0, 0 *)

It exists and is ZERO.

POSTED BY: Mariusz Iwaniuk

Version 11.2 introduces nested and multivariate limits and different directions.

POSTED BY: Sander Huisman

Hello Sander

Thanks for info.

My requests finally come true. :)

Regards Mariusz

POSTED BY: Mariusz Iwaniuk

I would like to see a revamped interpolation framework.

With ListStepPlot you can choose Left, Right or Center, while you can only have the right with the Interpolation framework, and ListPlot always gives you the left, again with no choice. While breaking current behaviour could be discussed, my main point here is the availability of options.

The number of questions on SE about how to get an interpolation function out of a plot also seems to indicate that many of us would like more options on the Interpolation function.

In some way, shouldn't the Interpolation function be the interpolation kernel of all functions that use interpolation? And shouldn't its options be exposed as the Method options of all functions that use interpolation? If it is not up to the job, shouldn't it be revamped?

POSTED BY: Pedro Fonseca

I'd like the function Interpolation to have, next to InterpolationOrder, an option Extrapolation or ExtrapolationOrder. You could then do something like:

Interpolation[data,Extrapolation ->0]

which would give 0 outside the domain of the data. Or you could do:

Interpolation[data,ExtrapolationOrder ->0] 

and then it would just keep the value constant outside the domain...
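Until such an option exists, order-0 extrapolation can be emulated by clipping the argument to the data's domain. A minimal sketch:

```mathematica
data = {{0, 0.}, {1, 1.}, {2, 4.}, {3, 9.}};
if = Interpolation[data];
{tmin, tmax} = MinMax[data[[All, 1]]];
(* hold the boundary value constant outside the domain *)
g[x_?NumericQ] := if[Clip[x, {tmin, tmax}]]
g[5.]  (* 9., the right-boundary value *)
```

This only covers the constant (order-0) case; a proper ExtrapolationOrder option would also cover linear and higher-order continuations.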

POSTED BY: Sander Huisman

A new function to create Venn diagrams which is already available in W|A.

POSTED BY: Sander Huisman

In certain cases, with experimental data mostly I guess, you have a ListLinePlot, but the data might have gaps in time. Sometimes you want the data to be Joined, but not across those large gaps; an option "MaxConnectivity" (or similar), defaulting to Infinity, would be great! Basically it would divide the data into chunks that are connected, but not connect between chunks. I now do this manually very often, but with multiple datasets the PlotStyle specifications also have to be copied the right number of times, which makes for very 'clunky' code...
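As a sketch of what such an option could do internally: split the data wherever the gap between consecutive abscissas exceeds a threshold, and plot the chunks as separate (but identically styled) datasets. Here maxGap plays the role of the hypothetical "MaxConnectivity" setting:

```mathematica
data = {{0, 1.}, {1, 2.}, {2, 1.5}, {10, 3.}, {11, 2.5}};
maxGap = 2;
(* keep adjacent points in the same chunk only if their time gap is small *)
chunks = Split[data, #2[[1]] - #1[[1]] <= maxGap &];
ListLinePlot[chunks, PlotStyle -> Blue]
```

With several datasets this is exactly where the manual approach gets clunky: each dataset's style has to be repeated once per chunk.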

POSTED BY: Sander Huisman
Posted 9 years ago

So some kind of WysiwygForm, as an extension of TraditionalForm, that preserves the order of output symbols "as is" would be nice to have

POSTED BY: Timur Gareev

I think you are looking for TraditionalForm@HoldForm[...].

Actually, it sounds like you are typing a formula and expect it to show as you typed it. Then why don't you just type it in a text cell and never evaluate it? If you don't evaluate it, nothing in it will change.

Press Alt-7 (Command-7 on Mac), then Control-9, then type your formula.

POSTED BY: Szabolcs Horvát
Posted 9 years ago

Sorry, I meant not HoldForm but DisplayForm: DisplayForm // TraditionalForm. My concern was different: I have self-made functions that produce nice output that should be usable in the text accompanying the presentation. HoldForm prevents evaluation, which is needed.

POSTED BY: Timur Gareev

This post can be removed. Sorry my mistake.

POSTED BY: Mariusz Iwaniuk

An upgraded Derivative function for symbolic differentiation, that is, the computation of nth-order derivatives where n is a symbol.

Examples:

D[Log[a*x + b]^k, {x, n}]
(* a^n/(a*x+b)^n*Sum[Pochhammer[k-m+1,m]*StirlingS1[n,m]*Log[a*x+b]^(k-1),{m,0,n}] *)
D[BellB[a, x]^k, {x, n}]
(* Sum[Pochhammer[m-n+1,n]*StirlingS2[a,m]*x^(m-n),{m,0,a}] *)
D[BernoulliB[x], {x, n}]
(* Pochhammer[v-n+1,n]*BernoulliB[v-n,x]*)
D[Binomial[x, k], {x, n}]
(* Sum[(-1)^(m+k)*StirlingS1[k,m]*Pochhammer[m-n+1,n]*(x-k+1)^(m-n),{m,1,k}] *)
D[EulerE[v, x], {x, n}]
(* Pochhammer[v-n+1,n]*EulerE[v-n,x] *)
D[Sin[Cos[a*x + b]], {x, n}]
(* Sum[Sin[Cos[a*x+b+k*Pi/2]]*BellY[n,k,{Cos[a*x+b+Pi/2]*a,...,Cos[a*x+b+((n-k+1)*Pi)/2]}*a^(n-k+1)],{k,0,n}] *)
...

(* and for: 12 elliptic Jacobi functions as well as
the four elliptic JacobiTheta functions,the LambertW,LegendreP
and more function..... *)
POSTED BY: Mariusz Iwaniuk

The nth-order derivative can now be computed (V11.1):

https://reference.wolfram.com/language/ref/D.html

POSTED BY: Sander Huisman

I'd like to see new functions for integral transforms:

  • Mellin transform and inverse
  • Hankel transform and inverse
  • Hilbert transform and inverse

There seem to be no such functions in Mathematica.

The Hankel transform, sometimes referred to as the Bessel transform, has uses in particular types of differential equations.

The Hilbert transform, sometimes called a quadrature filter, is useful in radar systems, single-sideband modulators, speech processing, measurement systems, and schemes for sampling band-pass signals.

The Mellin and inverse Mellin transforms are closely related to the Laplace and Fourier transforms and have applications in many areas, including:

  • digital data structures
  • probabilistic algorithms
  • asymptotics of Gamma-related functions
  • coefficients of Dirichlet series
  • asymptotic estimation of integral forms
  • asymptotic analysis of algorithms
  • communication theory
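For simple cases, the Mellin transform can of course be computed directly from its integral definition; a minimal sketch (mellinTransform is a hypothetical helper name, not a built-in):

```mathematica
(* Mellin transform via its defining integral *)
mellinTransform[expr_, x_, s_] := Integrate[expr x^(s - 1), {x, 0, Infinity}]
mellinTransform[Exp[-x], x, s]  (* Gamma[s], under the condition Re[s] > 0 *)
```

A built-in version would go well beyond this sketch: convergence strips, distributional cases, and a usable inverse.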
POSTED BY: Mariusz Iwaniuk

An upgraded LaplaceTransform function, handling cases such as the following:

enter image description here

and much more.

LaplaceTransform[t*f[t], t, s]
LaplaceTransform[t*f'[t], t, s]
LaplaceTransform[t*f''[t], t, s]
LaplaceTransform[f[x + a], x, s](* Exp[a*s]*(LaplaceTransform[f[x],x,s]-Exp[a*s]*Integrate[Exp[s x]*f[x],{x,0,a}]) *)
LaplaceTransform[HeavisideTheta[x - a]*f[x - a], x, s](* Exp[-a*s]*LaplaceTransform[f[x],x,s]*)


 ...
LaplaceTransform[t^2*f[t], t, s]
LaplaceTransform[t^2*f'[t], t, s]
LaplaceTransform[t^2*f''[t], t, s]

 ...
and many more ....

Mathematica can't solve these, which is very strange because they are the basics.
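These particular cases do follow from the standard operational identities, which can be applied by hand for a symbolic f; a sketch of the first two:

```mathematica
(* identity: L{t f(t)}(s) = -d/ds L{f(t)}(s) *)
lt = LaplaceTransform[f[t], t, s];
-D[lt, s]
(* identity: L{t f'(t)}(s) = -d/ds (s L{f}(s) - f(0)) *)
-D[s lt - f[0], s]
```

The request is precisely that LaplaceTransform apply such identities automatically instead of returning unevaluated.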

Upgraded LaplaceTransform and InverseLaplaceTransform functions, as below:

enter image description here

and much more....

 LaplaceTransform[ t^(\[Alpha] - 1)*MittagLefflerE[\[Alpha], \[Alpha], a*t^\[Alpha]], t, s];
 LaplaceTransform[MittagLefflerE[\[Alpha], -a*t^\[Alpha]], t, s];
 LaplaceTransform[1 - MittagLefflerE[\[Alpha], -a*t^\[Alpha]], t, s];
 LaplaceTransform[ t^(\[Beta] - 1)*MittagLefflerE[\[Alpha], \[Beta], a*t^\[Alpha]], t, s];

(*and inverse*)

 InverseLaplaceTransform[1/(s^\[Alpha] - a), s, t];
 InverseLaplaceTransform[s^\[Alpha]/(s*(s^\[Alpha] + a)), s, t]; 
 InverseLaplaceTransform[a/(s*(s^\[Alpha] + a)), s, t];
 InverseLaplaceTransform[s^(\[Alpha] - \[Beta])/(s^\[Alpha] - a), s, t];

 (*these  can't too*)
POSTED BY: Mariusz Iwaniuk

After 4 years, my dream has not come true yet. Mathematica 12.1.1 still can't solve these. :(

POSTED BY: Mariusz Iwaniuk
Posted 9 years ago

I would like to see the Maximize and Minimize functions with an option to return all solutions of the optimization problem (as the Lagrangian method does). It is, by the way, irritating that the documentation doesn't state explicitly that not all solutions may be returned.

POSTED BY: Timur Gareev

That would be useful, rather than having to use Reduce to solve the KKT equations.

POSTED BY: Frank Kampas

Again no new function, but rather ease of functionality. Functions like:

  • Min
  • Max
  • Mean
  • StandardDeviation
  • Skewness
  • Median
  • Kurtosis
  • Variance

All of these generally work on the first dimension, or on all dimensions (Min, Max). It would be great if all these functions got a Level option. Of course I can achieve that with Map/Apply, but it gives very unreadable code. For example:

Mean[matrix,1]
Mean[matrix,2]
Mean[matrix,{2}]

These could mean: the same as the 'normal' mean (mean of rows), the mean of means (rows and columns), and the mean over the 2nd dimension (a column). Of course this could be extended to even higher dimensions. To me it seems 'unpleasant' and inelegant to invoke powerful functions like Map and Apply to do something so simple. The folks at MATLAB have a similar syntax, which I envy (though I never use it).
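For reference, this is how the same operations look today with Map, which is what the proposed Level option would replace:

```mathematica
matrix = {{1., 2.}, {3., 4.}};
Mean[matrix]           (* {2., 3.}: mean along the first dimension *)
Mean /@ matrix         (* {1.5, 3.5}: mean along the second dimension *)
Mean[Flatten[matrix]]  (* 2.5: mean over all elements *)
```

Readable enough for matrices, but for higher-dimensional arrays the Map/Apply constructs quickly become opaque.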

POSTED BY: Sander Huisman

p.s. The function Total already has this syntax, so why not extend it to other common functions like the ones I mentioned above...

POSTED BY: Sander Huisman

Just a small functionality request: it would be nice if FromUnixTime were Listable by default. Of course I can do that myself every time, but it would be nice if this functionality were added.

POSTED BY: Sander Huisman

It would be nice if VoronoiMesh returned the mesh cells (via MeshCells[...,2]) in the same order as the original points; now they are in a more or less random order...
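Until then, the cells can be reordered by matching each original point to the cell that contains it; an O(n²) sketch:

```mathematica
pts = RandomReal[1, {5, 2}];
vm = VoronoiMesh[pts];
cells = MeshPrimitives[vm, 2];
(* for each original point, pick the cell that contains it *)
ordered = Table[SelectFirst[cells, RegionMember[#, p] &], {p, pts}];
```

This is fine for small meshes but wasteful for large ones, which is exactly why a built-in ordering guarantee would help.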

POSTED BY: Sander Huisman

It would be nice if Switch compiled without triggering MainEvaluate. So this:

SetSystemOptions[
   "CompileOptions" -> 
    "CompileReportExternal" -> 
     True]; 
Check[Compile[{{n, _Integer}}, 
   Switch[n, 1, 42, 2, 137]], 
  $Failed]

should not give $Failed as it does now.

Of course I can get the functionality of Switch with Ifs (something like:

Check[Compile[{{n, _Integer}},
  Module[{res = 0}, 
   If[n == 1, res = 42];
   If[n == 2, res = 137];
   res]], 
   $Failed]

but I fear it will be less efficient. Or?
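As an aside (and assuming the goal is just a compilable multiway branch): Which is among the constructs Compile supports, so the following sketch may avoid MainEvaluate entirely; worth verifying with Check in your version:

```mathematica
(* a Which-based alternative to the Switch, with an explicit default *)
Check[Compile[{{n, _Integer}},
   Which[n == 1, 42, n == 2, 137, True, 0]],
  $Failed]
```

Note the explicit True branch: in compiled code the result must be defined for every input, which is the same question raised in the reply below about the If version.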

POSTED BY: Rolf Mertig

It should transform to:

If[n == 1, res = 42,
   If[n == 2, res = 137]
];

right? But what happens if none of them are true?

POSTED BY: Sander Huisman

A nice addition to various plot functions would be for DataRange to also accept nonlinear specifications. In many cases you have an array of data, and you can plug it directly into ListPlot3D or ListContourPlot and set the x and y ranges very nicely. But what about cylindrical coordinates? Or nonlinear coordinates? A specification like:

DataRange -> {{x1,x2,x3,x4,x5,x6...},{y1,y2,y3,y4,y5,y6,...}}

would allow for x and y coordinates that are independent. But for the dependent case (the cylindrical case, for example), it would be nice to have:

DataRange ->{{{x1,y1},{x2,y1},....{xn,y1}} , {{x1,y2},{x2,y2}....{xn,y2} ........}}

Basically an array with all the coordinates.

POSTED BY: Sander Huisman

Maybe not a new function, rather new functionality consistency:

data = {};
data[[All, 2]] = 3;       
Select[data, #[[2]] == 3 &];
MapAt[f, data, {All, 2}]
BinCounts[data, {0, 5, 1}]
BinCounts[data, {0, 5, 1},{1,5,2}]

So lines 2 and 3 execute perfectly; they don't care that it is an empty list. However, MapAt does not work; it remains unevaluated. Quite strange, I would say; in this case 'All' means 'all of the zero elements', so I don't see the problem...

Then, BinCounts in 1D works fine on an empty list (it returns an array of zeros), but BinCounts in 2D does not work on an empty list! I basically have my own safeguard function that checks whether Length[data]==0 and then returns ConstantArray[0, dims], which is not so nice... I definitely think this is a huge inconsistency, especially because 1D works but 2D doesn't!
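Such a safeguard might look like this (safeBinCounts is a hypothetical name; it only handles explicit {min, max, d} bin specifications, from which the zero-array dimensions are derived):

```mathematica
(* empty input: return an all-zero array of the right shape *)
safeBinCounts[{}, specs : {_, _, _} ..] :=
  ConstantArray[0, Ceiling[(#[[2]] - #[[1]])/#[[3]]] & /@ {specs}]
(* otherwise defer to the built-in *)
safeBinCounts[data_, specs : {_, _, _} ..] := BinCounts[data, specs]

safeBinCounts[{}, {0, 5, 1}, {1, 5, 2}]  (* a 5 x 2 array of zeros *)
```

Having BinCounts itself do this in every dimension, as it already does in 1D, would remove the need for such wrappers.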

POSTED BY: Sander Huisman

I don't see a perfect method using existing methods, because some intermediate storage seems to be needed. But one can just use tables to avoid unpacking.

MemoryInUse[]
MaxMemoryUsed[]

(* Out[772]= 607593496

Out[773]= 609310128 *)

In[774]:= n = 2000;
mat = RandomReal[1, {n, n}];
Developer`PackedArrayQ[mat]
SetSystemOptions["PackedArrayOptions" -> {"UnpackMessage" -> True}];
uppermat = ConstantArray[0., n*(n + 1)/2];
i = 1;
Do[uppermat[[i]] = mat[[j, k]]; i++;, {j, n}, {k, j, n}];
Developer`PackedArrayQ[uppermat]

Out[776]= True

Out[781]= True

MemoryInUse[]
MaxMemoryUsed[]

(* Out[782]= 671617616

Out[783]= 671739592 *)

mat // ByteCount
uppermat // ByteCount

(* Out[785]= 32000152

Out[786]= 16008144 *)

So the memory jump is about equal to the size of mat plus the size of uppermat (the bare minimum) plus another uppermat, hinting that maybe a copy was made internally.

Can do similarly with Sow/Reap.

uppermat = Reap[Do[Sow[mat[[i, j]]], {i, n}, {j, i, n}]][[2, 1]];

It is slower, and I think the intermediate memory use might be greater (it is a different data structure than a packed array). It never unpacks anything, though.

POSTED BY: Daniel Lichtblau

@Daniel Lichtblau Thanks for reminding me not to ignore the procedural approach! :-)

Can you find a way to compile this for better performance? A naive approach complains about

Compile::part: "Part specification uppermat[[i]] cannot be compiled since the argument is not a tensor of sufficient rank.

and I'm not sure how to tell it that it indeed is a vector.

cf = Compile[{{mat, _Real, 2}},
  Module[{uppermat, i, n},
   n = Length[mat]; (* assume square mat *)
   uppermat = ConstantArray[0., n*(n + 1)/2];
   i = 1;
   Do[
    uppermat[[i]] = mat[[j, k]];
    i++;,
    {j, n}, {k, j, n}
    ];
   uppermat
   ],
  {{uppermat, _Real, 1}}
  ]

A dedicated function would still be very nice though, but I realize that my use-case for this might be fairly unique. My LibraryLink solution takes 0.004 seconds for this 2000 by 2000 matrix vs 3 seconds for the Do loop. I tend to write a lot of LibraryLink code recently as LTemplate (even in this rudimentary form) makes it easy enough. Maybe a much better version of what I was trying to do with LTemplate would be a very useful future addition to Mathematica! The WTC2015 video on the new compilation technology was quite interesting.

POSTED BY: Szabolcs Horvát

The problem appears to be in the use of ConstantArray. One workaround: change that line to uppermat = Table[0., {n*(n + 1)/2}];. Another is to make sure it understands that its second argument is an integer, via uppermat = ConstantArray[0., Round[n*(n + 1)/2]];. And you can then get rid of the third argument to Compile.

Once properly handled in compilation, this runs quite fast by the way.

POSTED BY: Daniel Lichtblau

Functions for extracting the lower and upper triangular parts of a matrix (and store it in a flat array).

I am not sure how much demand there is for this. I mentioned it because personally I need this very often. While this is easy to implement in terms of other functions, what I am looking for is (memory) efficiency: it should avoid unpacking arrays. Currently I use LibraryLink implementations specialized for real and integer types, so I can handle arrays that barely fit in memory. My preferred pure-Mathematica implementation is Join@@Pick[mat, LowerTriangularize[ConstantArray[1, Dimensions[mat]], -1], 1], but I needed better memory efficiency than what this can offer.

POSTED BY: Szabolcs Horvát

I would like to be allowed to replace the clumsy 'type specification'

f[x_ /; VectorQ[x, (Head[#] === h)& ]] := ...

by the simple and suggestive form

f[x__h] := ...

The rationale for this is: whenever one uses a head (e.g. 'par' for 'particle') for OO-like organization of a Wolfram Language program, one probably also uses lists of objects with just this head (lists of particles in the example above). So one should have an easy way to define functions for the treatment of lists (systems) of particles.

POSTED BY: Ulrich Mutze

What stops you from using

f[x:{___h}]:=...

?

POSTED BY: Leonid Shifrin

Thank you for this solution. I was not aware of

___h

Nevertheless I think that allowing

__h

(without a step of transforming a sequence into a list) would be desirable since it would be more natural to people who are used to

f[n_Integer, x_Real, z_Complex] := ...
POSTED BY: Ulrich Mutze

You can still do that, if you want. In such case you will have to put List around the match for __h on the r.h.s. For example,

total[elems___Integer]:= Total[{elems}]

But this would really mean taking a sequence as an argument, not a list / vector. So, if you want your argument to be a vector, this isn't right, semantically. The semantically correct way still would be

total[elems:{___Integer}]:=Total[elems]

Apart from this issue, the first definition has a number of others: you can't easily add other positional arguments (particularly in cases when their type is the same _h as that of your vector's elements), and also you will have to use Apply to pass an actual vector / list to such a function, like total @@ {1,2,3}. And for packed arrays, for example, Apply will unpack.

POSTED BY: Leonid Shifrin

Note that __h is already taken. It means one or more element with head h. ___h means zero or more elements with head h. These sequences of elements can be contained not only in a List, but also any other expression.

Finally, it is good to mention that patterns like {___Real} are optimized to work with packed arrays and will not unpack (nor will they test every element: all elements in a packed array are of the same type). VectorQ[arr, Head[#] === Real&] will unpack, and will test every element explicitly. Thus it will be slow. However, VectorQ with certain second arguments is optimized again and will not unpack (or test every element). Examples are VectorQ[..., NumberQ], VectorQ[..., NumericQ], VectorQ[..., IntegerQ], etc.

These observations are of interest only if you work with numerical types and packed arrays, of course.
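These claims are easy to check by switching on the unpacking message; a sketch (messages and timings may differ by version):

```mathematica
SetSystemOptions["PackedArrayOptions" -> {"UnpackMessage" -> True}];
arr = RandomReal[1, 10^6];
Developer`PackedArrayQ[arr]       (* True *)
MatchQ[arr, {___Real}]            (* True; no unpack message *)
VectorQ[arr, NumberQ]             (* True; optimized, no unpack *)
VectorQ[arr, Head[#] === Real &]  (* True; unpacks and is slow *)
```

The last line triggers the unpack message and element-by-element testing, which is the behavior to avoid on large numeric arrays.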

POSTED BY: Szabolcs Horvát

@Szabolcs, thank you for freeing me from my misconception that __ deals with lists and ___ deals with sequences. You clarified the point and gave valuable additional information concerning VectorQ. Thanks again.

POSTED BY: Ulrich Mutze

RegionIntersectionQ or RegionOverlapQ, which gives True/False depending on whether two regions overlap. With an option specifying whether partial or full overlap is required (i.e. does reg1 cover reg2 completely, or only partially).

POSTED BY: Sander Huisman

I think this would be a nice feature too. @Sander Huisman, what applications did you have in mind for this?

POSTED BY: Sam Carrettie

I'm, for example, looking at Voronoi diagrams. Now I draw a line... which cells intersect this line? I end up 'rasterizing' the line and using RegionMember for each cell. Similarly, say you want to look for Voronoi cells that are completely within the convex hull; you have to do some magic yourself now. One can probably think of more uses. A general construct would be very useful... either with the names I proposed, or by expanding the RegionMember function to also allow regions (of any dimension) as the second argument.

POSTED BY: Sander Huisman

These are now (V11.1) included in the form of RegionDisjoint, RegionWithin, and RegionEqual.

POSTED BY: Sander Huisman

I'd like to see a new function called DChangeTransform[], which can change the variables of differential equations; there seems to be no such function in Mathematica.

Examples:

DChangeTransform[D[f[x], {x, 2}] - f[x] == 0, {t = Exp[x]}, {x}, {t}, {f[x]}]
(* t^2*f''[t]+t*f'[t]-f[t]=0 *)
DChangeTransform[D[f[x], {x, 2}] - f[x] == 0, {t = Log[x]}, {x}, {t}, {f[x]}]
(* Exp[2*t]*f[t]+f'[t]-f''[t]=0 *)
DChangeTransform[D[f[x], x] - f[x] == 0, {t = Tan[x]}, {x}, {t}, {f[x]}]
(* (1+t^2)*f'[t]+f[t]=0 *)
DChangeTransform[D[u[x, t], {t, 2}] == c^2 D[u[x, t], {x, 2}], {a == x + c t, r == x - c t}, {x, t}, {a, r}, {u[x, t]}]
(* c (u^(1,1))[a,r]=0 *)
DChangeTransform[ D[z[x, y], {x, 2}] - D[z[x, y], {x, 1}, {y, 1}] - 2*D[z[x, y], {y, 2}] == 0, {u == 2*x + y, v == y - x}, {x, y}, {u, v}, {z[x, y]}]
(* -9 (z^(1,1))[u,v]=0 *)
DChangeTransform[x^2*D[f[x, y], {x, 2}] - 2*x*y*D[f[x, y], {x, 1}, {y, 1}] + y^2*D[f[x, y], {y, 2}] + x*D[f[x, y] == 0, {x, 1}] + y*D[f[x, y], {y, 1}], {u == x, v == x*y}, {x, y}, {u, v}, {f[x, y]}]
(* u ((f^(1,0))[u,v]+u (f^(2,0))[u,v])=0 *)
DChangeTransform[x*y^3*D[f[x, y], {x, 2}] + x^3*y*D[f[x, y], {y, 2}] - y^3*D[f[x, y], {x, 1}] - x^3*D[f[x, y], {y, 1}] == 0, {u == y^2, v == x^2}, {x, y}, {u, v}, {f[x, y]}]
(* u^(3/2) v^(3/2) ((f^(0,2))[u,v]+(f^(2,0))[u,v])=0  *)
DChangeTransform[D[f[x, y], x, x] + D[f[x, y], y, y] == 0, "Cartesian" -> "Polar", {x, y}, {r, t}, f[x, y]]
(* (f^(0,2))[r,t]+r ((f^(1,0))[r,t]+r (f^(2,0))[r,t])=0 *)
DChangeTransform[D[f[x, y, z], {x, 2}] + D[f[x, y, z], {y, 2}] + D[f[x, y, z], {z, 2}] == 0, "Cartesian" -> "Spherical", {x, y, z}, {r, t, s}, f[x, y, z]]
(*(Csc[t]^2 (f^(0,0,2))[r,t,s])/r^2+(Cot[t] \(f^(0,1,0))[r,t,s])/r^2+(f^(0,2,0))[r,t,s]/r^2+(2 \(f^(1,0,0))[r,t,s])/r+(f^(2,0,0))[r,t,s]=0  *)
DChangeTransform[(x^2 + y^2)*D[u[x, y], x, x] + D[u[x, y], y, y] == 0, "Cartesian" -> "Polar", {x, y}, {r, t}, u[x, y]]
(* 2 (-1+r^2) Sin[2 t] (u^(0,1))[r,t]+(1+r^2-(-1+r^2) Cos[2 t]) \(u^(0,2))[r,t]+r ((1+r^2-(-1+r^2) Cos[2 t]) (u^(1,0))[r,t]-2 (-1+r^2) \Sin[2 t] (u^(1,1))[r,t]+r (1+r^2+(-1+r^2) Cos[2 t]) (u^(2,0))[r,t])=0 \ *)
DChangeTransform[2 (-1 + r^2) Sin[2 t] D[u[r, t], {t, 1}] + (1 + r^2 - (-1 + r^2) Cos[2 t]) D[ u[r, t], {t, 2}] +  r ((1 + r^2 - (-1 + r^2) Cos[2 t]) D[u[r, t], {r, 1}] - 2 (-1 + r^2) Sin[2 t] D[u[r, t], {r, 1}, {t, 1}] + r (1 + r^2 + (-1 + r^2) Cos[2 t]) D[u[r, t], {r, 2}]) == 0, "Polar" -> "Cartesian", {r, t}, {x, y}, u[r, t]]
(* (u^(0,2))[x,y]+(x^2+y^2) (u^(2,0))[x,y]=0 *)
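For simple substitutions, this can already be done by hand with pattern replacement; a sketch of the first example above (substituting f(x) = g(e^x), with g a hypothetical new dependent variable):

```mathematica
eq = D[f[x], {x, 2}] - f[x] == 0;
step = eq /. f -> (g[Exp[#]] &);  (* substitute f(x) = g(Exp[x]) *)
Simplify[step /. x -> Log[t], t > 0]
(* equivalent to t^2 g''[t] + t g'[t] - g[t] == 0 *)
```

The point of a built-in DChangeTransform would be to automate exactly this substitute-and-simplify bookkeeping, including the multivariate and named-coordinate-system cases.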
POSTED BY: Mariusz Iwaniuk

Given the number of replies here, perhaps the post should be broken up into subcategories such as Graphics, Symbolics, Numerics, and Data Manipulation.

POSTED BY: Frank Kampas

Perhaps even a separate 'group'...

POSTED BY: Sander Huisman

I would like to see better support for vectorization when working with conditionals.

Examples:

  • Select all elements of an array satisfying some inequalities
  • Replace all elements of an array with 0 or 1 depending on whether they satisfy some inequalities of equalities

Doing the first one for 1D arrays is already easy for arbitrary criteria (not just inequalities). We have Select and Cases. But they are not fast. If we restrict ourselves to inequalities only, there are much faster ways to do it. These operations can be formulated as simple arithmetic done on packed arrays, combined with UnitStep and Unitize. Vector-arithmetic is very fast.

For example, let us select all elements of an array that are >= 3. The solution:

mask = UnitStep[arr-3];
Pick[arr, mask, 1]

This is much faster than Select, but also much less readable and much more complicated to write; especially if we now require > 3 (strictly greater). Then we end up with the convoluted

mask = 1 - UnitStep[3 - arr]
Pick[arr, mask, 1]

Do the same for a multidimensional array for a complicated criterion like x > 0 || x < -1 and we quickly end up with some very opaque code which might easily contain a small mistake ...

In MATLAB it is so much easier to do the same thing. It would be simply

arr( (arr > 0) | (arr < -1) )

I wrote the BoolEval package to make it easy in Mathematica too. With this package we can simply do

BoolPick[arr, arr > 0 || arr < -1]

and get almost the same performance as the UnitStep based construction while retaining readability, ease of writing and reducing the possibility of mistakes. But the package is of course not perfect, it struggles with multidimensional arrays, and due to the translation it needs to do from logical expressions to a UnitStep based form, it is not quite as fast as a directly written UnitStep.

I would like to see functionality similar to this built in. I'm not sure what is the best way to expose it, what I did in the BoolEval package might not be the best way (it likely isn't). That is why I wrote up in detail what sorts of problems I want to solve. I have been using this package regularly over the last couple of years so I am convinced that this functionality is needed.

Witness how dramatic the performance differences can be:

In[53]:= arr = RandomReal[1, 1000000];

In[54]:= Pick[arr, UnitStep[arr - 0.5], 1]; // AbsoluteTiming
Out[54]= {0.019223, Null}

In[55]:= BoolPick[arr, arr >= 0.5]; // AbsoluteTiming
Out[55]= {0.093456, Null}

In[56]:= Select[arr, # >= 0.5 &]; // AbsoluteTiming
Out[56]= {0.481385, Null}

If you search this thread for the term "vectorization", you'll find my earlier post requesting a UnitStep like function that returns 0 for 0 (UnitStep[0] is 1). The motivation was the same.

POSTED BY: Szabolcs Horvát

I can't agree more, I'm also a heavy Select user. It is even worse once one uses the new operator form:

arr = RandomReal[1, 1000000];
Select[arr, # >= 0.5 &]; // AbsoluteTiming
Select[arr, GreaterEqualThan[0.5]]; // AbsoluteTiming

which is roughly 1.5 times slower (almost as bad as Replace[arr, _?(# < 0.5 &) :> Nothing, {1}]).

I can imagine an updated version of Select internally translating these 'simple' criteria for larger lists (100+ elements or so), like you did with your BoolPick function.

Apart from that, it would be nice to have a Select with a level specification. I constantly use Map and Select and create these crazy constructs of pure functions within pure functions, which is probably not the fastest or most elegant solution.

EDIT: Just slightly faster than the Select is to use Pick and Thread:

Pick[arr, Thread[arr >= 0.5], True] // Length // AbsoluteTiming
POSTED BY: Sander Huisman

This problem is quite a bit improved in V10.4, where the operator form is only 10% slower.

POSTED BY: Sander Huisman
Posted 9 years ago

I would like to have something like ShapeMatrix (analogous to DiskMatrix, CrossMatrix or DiamondMatrix) for an arbitrary black-and-white 2D shape or special symbol that can be used as input.

POSTED BY: Timur Gareev

How would the shape be specified? There is already Rasterize, which works both for graphics and for other things that can be shown in a notebook (text). I am not sure how consistent Rasterize is between platforms for graphics. I know that it is not completely consistent for text, due to the font rendering of different OSs.

POSTED BY: Szabolcs Horvát
Posted 9 years ago

At first glance, Rasterize looks appropriate as one of the options.

POSTED BY: Timur Gareev

Maybe some pure function?

Norm[{##}] <= 1 & would be DiskMatrix,

AnyTrue[{##}, # == 0 &] & would be CrossMatrix,

Norm[{##}, 1] <= 1 & would be DiamondMatrix ...

Combined with some Array function...

This should work in all dimensions; Rasterize is kind of limited to 2 dimensions, and for higher and lower dimensions you would have to combine multiple rasterizations in some way...
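A 2D sketch of the predicate idea (shapeMatrix is a hypothetical name; extending it to other dimensions would mean generating the coordinate ranges programmatically):

```mathematica
(* build a 0/1 matrix from a predicate on coordinates scaled to [-1, 1] *)
shapeMatrix[pred_, r_] := Table[Boole@pred[{x, y}/r], {x, -r, r}, {y, -r, r}]
shapeMatrix[Norm[#] <= 1 &, 2]     (* disk-shaped, cf. DiskMatrix[2] *)
shapeMatrix[Norm[#, 1] <= 1 &, 2]  (* diamond-shaped, cf. DiamondMatrix[2] *)
```

For arbitrary symbols, the predicate could instead look up pixels of a rasterized glyph, which is where the Rasterize route and the predicate route would meet.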

POSTED BY: Sander Huisman

It would be nice if the BlockMap function were extended to handle overhangs like kl and kr in Partition. Support for padding like in Partition would also be very welcome, basically following the exact same arguments that Partition has.

In addition to BlockMap, it would be nice to have a new function called BlockApply for completeness.
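In the meantime, the overhang behaviour can be emulated on top of Partition (a workaround sketch; blockMapOverhang is a hypothetical name):

blockMapOverhang[f_, list_, n_, d_, {kl_, kr_}] := f /@ Partition[list, n, d, {kl, kr}, {}]

blockMapOverhang[Total, Range[10], 3, 3, {1, 1}]
(* {6, 15, 24, 10}; the partial block {10} at the end is kept *)

The empty padding list {} makes Partition return ragged sublists instead of padding cyclically.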

POSTED BY: Sander Huisman

Developer`PartitionMap has the desired functionality of BlockMap. What's more, Developer`PartitionMap is slightly faster.

?Developer`PartitionMap

What do you expect from BlockApply? Have you seen ArrayFilter?

POSTED BY: Alexey Golyshev

I didn't know about PartitionMap, but it seems that BlockMap uses PartitionMap internally:

Trace[BlockMap[g, Range[5], 2, 1]]

What I would like to see from BlockApply is this:

BlockApply[g, Range[5], 2, 1]

{g[1, 2], g[2, 3], g[3, 4], g[4, 5]}

which would be equivalent to: BlockMap[g @@ # &, Range[5], 2, 1]

Basically it will be the Apply equivalent just like you have the Map/ BlockMap pair.

POSTED BY: Sander Huisman

You are right. It seems that BlockMap uses PartitionMap internally. Bug in Trace? ;-)

If BlockMap is PartitionMap without any optimization, it's strange that not all the functionality has been activated.

POSTED BY: Alexey Golyshev

I think it is without optimization, just a lot of safety checks, from what I can decipher from the trace results...

POSTED BY: Sander Huisman

I'd like to see a new function, ConvertFunction[ ], used to convert an expression from one form to another.

ConvertFunction[expr, Method -> target-expr] -> attempts to convert the specified expr to the specified target-expr

Examples:

ConvertFunction[BesselK[1/3, x], Method -> AiryAi]
(*  \[Pi]*Sqrt[3^(1/3)*2^(2/3)/(x^(2/3))]*AiryAi[1/2*3^(2/3)*2^(1/3)*x^(2/3)]  *)
ConvertFunction[KelvinKei[n, x^2], Method -> BesselK]
(*  1/2*I*(Exp[1/2*I*n*Pi]^2*BesselK[n,(1/2-1/2*I)*x^2*Sqrt[2]]-\
BesselK[n,(1/2+1/2*I)*x^2*Sqrt[2]])/Exp[1/2*I*n*Pi]] *)
ConvertFunction[BesselJ[n, x], Method -> {HankelH1, HankelH2}]
(*  1/2*HankelH1[n,x]+1/2*HankelH2[n,x] *)
ConvertFunction[Piecewise[{{1, x < 0}, {2, x < 1}, {3, x < 2}}],Method -> HeavisideTheta]
(*  1+HeavisideTheta[x]+HeavisideTheta[-1+x]-3*HeavisideTheta[-2+x]  *)
ConvertFunction[Gamma[n + 3/2]/(Sqrt[Pi]*Gamma[n]),Method -> Binomial]
(*  n*(n+1)*Binomial[n+1/2,-1/2]  *)
ConvertFunction[1/2*x*Pi^(1/2)*(-2*x^2 + 2)^(1/4)*LegendreP[-1/2, -1/2, -2*x^2 + 1]/(-2*x^2)^(1/4), Method -> ArcTrig]
(* ArcSin[x]  *)
ConvertFunction[Exp[x^2] - 2*Sinh[x^2], Method -> Exp]
(* Exp[-x^2]  *)
ConvertFunction[Cos[x]*Sin[x], Method -> {Exp, Log}]
(*  -1/2*I*(1/2*Exp[I*x]+1/2*Exp[-I*x])*(Exp[I*x]-Exp[-I*x]) *)
ConvertFunction[Cos[x]*Sin[x], Method -> Tan]
(*  2*Tan[1/2*x]*(1-Tan[1/2*x]^2)/(1+Tan[1/2*x]^2)^2 *)
ConvertFunction[Sin[x], Method -> Sum]
(* Sum[(-1)^n*x^(2*n+1)/(2*n+1)!,{n,0,Infinity}]  *)
ConvertFunction[Sqrt[(1 - Sqrt[1 - x])/x], Method -> Sum]
(* Sum[Sqrt[2]*(4*n)!*16^-n*x^n/(((2*n)!)^2*(2*n+1)),{n,0,Infinity}] *)
ConvertFunction[WhittakerM[0, 1/2, x], Method -> ElementaryFunction]
(*  -2*I*Sin[1/2*I*x]  *)
ConvertFunction[DawsonF[x], Method -> Erf]
(*  -(-1/2*I*Sqrt[Pi]*Erf[I*x])/Exp[x^2]  *)
ConvertFunction[Gamma[x], Method -> Factorial]
(*  (x-1)!  *)
ConvertFunction[LegendreP[1/2, x], Method -> {EllipticK, EllipticE}]
(*  2*Sqrt[2]*Sqrt[x+1]*EllipticE[Sqrt[(x-1)/(x+1)]]/Pi-2*Sqrt[2]*EllipticK[Sqrt[(x-1)/(x+1)]]/Pi/Sqrt[x+1] *)
POSTED BY: Mariusz Iwaniuk

For:

ConvertFunction[Piecewise[{{1, x < 0}, {2, x < 1}, {3, x < 2}}], 
 Method -> HeavisideTheta]

we can use:

 InverseLaplaceTransform[
  LaplaceTransform[Piecewise[{{1, x < 0}, {2, x < 1}, {3, x < 2}}], x, 
   s], s, x]

 (*2 - 3 HeavisideTheta[-2 + x] + HeavisideTheta[-1 + x]*)
POSTED BY: Mariusz Iwaniuk
Posted 9 years ago

It would be nice if WeightedData would take an association of values to weights as its argument; then you could use the results of Counts and CountsBy with it directly.

POSTED BY: Matt Pillsbury

Another useful function would be one like Select, but that returns two lists: the ones selected and the ones removed. So you can easily (for instance) separate a dataset into two groups based on a threshold... This can be achieved using GatherBy and a function, but that might be less intuitive...

POSTED BY: Sander Huisman

Reap + Sow with tags

selectSplit[data_,patt_]:=Reap[Sow[#,patt@#]&/@data;][[2]]

data=Range[10];
selectSplit[data,OddQ]
{{1, 3, 5, 7, 9}, {2, 4, 6, 8, 10}}
POSTED BY: Alexey Golyshev

I have a similar implementation, but problems always occur when the data is (in your case) only odd, only even, or when indecisive data is included... Of course this can also be fixed... You would always like an answer back of the form {{all data that give True}, {all data that give False}}, not the other way around ({False, True}), and not {False} or {True} et cetera...

POSTED BY: Sander Huisman

One might call the function Separate (which is not used yet, and sounds like what it does). The names Divide and Split are already taken...

POSTED BY: Sander Huisman
SetAttributes[Separate, HoldRest]

Separate[data_, patt_] := Module[
  {res},
  res = GatherBy[data, patt];
  If[patt@res[[1, 1]], res, Reverse@res]
  ]

Separate[data, EvenQ]
{{2, 4, 6, 8, 10}, {1, 3, 5, 7, 9}}
POSTED BY: Alexey Golyshev
Posted 9 years ago

It's really easy with the new GroupBy function:

selectSplit[pred_] :=  GroupBy[pred] /* Lookup[{True, False}];
selectSplit[list_, pred_] := selectSplit[pred][list];

selectSplit[Range[10], OddQ]
(* {{1, 3, 5, 7, 9}, {2, 4, 6, 8, 10}} *)

Since M10 came down the pike, I've discovered that it's often most convenient to define the curried "operator form" of a function first, because it can be done very concisely as a "pipeline" of composed functions, and then define the "ordinary" form in terms of it.

POSTED BY: Matt Pillsbury

Neither solution accounts for the case when one of the groups (or both) is empty, so you need quite involved code to handle those cases as well:

ClearAll[Separate]
SetAttributes[Separate, HoldRest]
Separate[data_, patt_] := Module[{res},
  res = GatherBy[data, patt];
  {SelectFirst[res, If[Length[#] > 0, patt@#[[1]], False] &, {}], 
   SelectFirst[res, If[Length[#] > 0, ! patt@#[[1]], False] &, {}]}
  ]
Separate[Range[10, 20, 2], EvenQ]
Separate[Range[11, 21, 2], EvenQ]
Separate[Range[10, 20, 1], EvenQ]
Separate[{}, EvenQ]

making it a little less 'elegant'.

POSTED BY: Sander Huisman

A GroupBy, rather than a Gather(By), solution might indeed be more elegant...

POSTED BY: Sander Huisman
Posted 9 years ago

Oh, good point. You can't use the pipeline, but the default value argument for Lookup means it can still be pretty clean:

 ClearAll[selectSplit];
 selectSplit[pred_] :=
  Lookup[GroupBy[#, pred], {True, False}, {}] &
 selectSplit[list_, pred_] := selectSplit[pred][list];

selectSplit[Range[2, 10, 2], EvenQ]
(* {{2, 4, 6, 8, 10}, {}} *)
POSTED BY: Matt Pillsbury

This solution is indeed robust and I like it, Thanks! Though I would disregard the pure function in favour of a slightly different algorithmic structure:

 ClearAll[selectSplit];
 selectSplit[pred_][list_List] := selectSplit[list, pred]
 selectSplit[list_List, pred_] := Lookup[GroupBy[list, pred], {True, False}, {}]

 selectSplit[Range[2, 10, 2], EvenQ]
 selectSplit[Range[3, 11, 2], EvenQ]
 selectSplit[{}, EvenQ]
 selectSplit[EvenQ][4]
 selectSplit[3][{1, 2, 3}]

this will also immediately fend off the penultimate one from being evaluated...

POSTED BY: Sander Huisman

Oh, and one might want the group for which the pattern does not match to also include the indecisive ones, like an unassigned symbol (variable name)...

POSTED BY: Sander Huisman

I would like to see much faster histogram calculations, both in Histogram itself and in BinCounts.

Histogram is a very flexible function that can compute many different kinds of histograms. Surely all of these can't be fast. But the most common cases should have very fast code paths. In the very least, when bin sizes are uniform and explicitly given, Histogram should be much faster, both in 1D and multiple dimensions.

I know that BinCounts is faster, but why can't Histogram be just as fast? Also, BinCounts is not nearly as fast as it could be, and can be outperformed using Mathematica code (no need to resort to C):

http://community.wolfram.com/groups/-/m/t/237660

Here's more proof that people need better performance:

http://mathematica.stackexchange.com/questions/96392/fast-1d-bincounts-alternative/96395#96395

On multiple occasions I was forced to implement histogramming using LibraryLink. Such a basic operation should be built in and perform much better than it currently does (for million to billion element numerical arrays).


A closely related request is to make it easier to plot pre-binned data as a histogram, with the same flexibility as Histogram allows for unbinned data. This would be handy for plotting data from BinCounts or custom histogramming functions.

http://community.wolfram.com/groups/-/m/t/593095


After this criticism of Histogram I should also point out that it is a very flexible and versatile function which makes it easy to create nice visualizations with little code. One of the big reasons why I want Histogram to perform better instead of being content with my own implementation is because I want access to all its features without compromising on performance.

POSTED BY: Szabolcs Horvát

It is strange that it takes so long indeed, because if you check Trace on a simple BinCounts:

Trace[BinCounts[{0.1, 0.5, 0.7, 0.75, 0.9}, {0.0, 1.0, 0.1}]]

and if I understood correctly, it relies on division (by the bin width), using Floor to force it to an integer, and then subtraction to have the first bin as '1'. Then it uses Tally to, well... tally it. However, it might be that Tally in this case is not so efficient, as it is probably an n*log(n) algorithm... It might be more efficient just to go through the array to find the minimum and the maximum; if (maximum - minimum) << n, then make a ConstantArray[0, number of bins] (or a sparse array?) and cycle through the points one by one, adding 1 to the nth bin. If maximum - minimum > n, it might be more advantageous to use Tally...

Unfortunately one cannot use Trace on Tally, so it is probably implemented in the kernel (C), while BinCounts is not.
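That Floor-based strategy can be sketched in top-level code (fastBinCounts is a hypothetical name); on a packed array the index computation vectorizes, and Counts plus Lookup does the tallying:

fastBinCounts[data_, {min_, max_, d_}] := Module[{idx, nbins = Round[(max - min)/d]},
  idx = Floor[(data - min)/d]; (* vectorized bin index for every point *)
  Lookup[Counts[idx], Range[0, nbins - 1], 0]]

fastBinCounts[{0.15, 0.55, 0.72, 0.78, 0.95}, {0., 1., 0.1}]
(* {0, 1, 0, 0, 0, 1, 0, 2, 0, 1}, matching BinCounts; values exactly on bin edges may land differently due to floating-point rounding *)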

I think they have one general algorithm for an N-dimensional binning and little (no) special cases for the simple cases you mentioned.

But yes, such things should indeed be sped up! Just yesterday I was binning a couple million 2D points, which took (what felt like) forever. And that included explicit bin specifications in both directions...

And while they are at it, please also create my BinListsBy or give me back indices in a function called BinIndices.

POSTED BY: Sander Huisman

Some more specific Clustering algorithms, like DBSCAN and OPTICS...

POSTED BY: Sander Huisman

I would like to see better support for package development, and more functions that would typically be used in this context (as opposed to interactive work). It would be nice to have more of the Developer` context symbols documented. I would like to see some of the Internal` context (and other undocumented) functions lifted into the Developer` context and being officially supported. I am thinking of things like PositiveMachineIntegerQ, RealValuedNumericQ, WithLocalSettings, etc. These are things that are typically (very) useful during package development, but less so during everyday work.

POSTED BY: Szabolcs Horvát
Posted 9 years ago

I would like to have something like a ListMask[] function with the following aim. Say I have a nested list nlist1 = {a, {b}, {{c}}}. I would like to apply its "List mask" {_,{_},{{_}}} to an arbitrary flat list of the same length as Flatten[nlist1], i.e. the list {x, y, z}, to get {x, {y}, {{z}}}.

POSTED BY: Timur Gareev

I'm not sure if this warrants a new function, especially with the name you proposed. ListTransform, ListReshape, ListShape, et cetera, might be better. However, I think its use is too specific; your function is, however, easy to implement:

ListMask[pattern_,list_List]:=ReplacePart[pattern,Thread[Position[pattern,Verbatim[_]]->list]]
ListMask[{_,{_},{{_}}},{1,2,3}]
POSTED BY: Sander Huisman
Posted 9 years ago

I was thinking about a function that would take the nested list in place of your pattern (so the idea is to grab the pattern from a list). Besides, I noticed such an approach may be too slow for huge lists; just the first thing that came to mind: ListMask[Split[list], Range[Length[list]]]

POSTED BY: Timur Gareev

This will 'copy' the structure of one list to another:

ListMask[list1_List,list2_List]:=With[{l=Replace[list1,_:>_,{-1}]},ReplacePart[l,Thread[Position[l,Verbatim[_]]->list2]]]
ListMask[{1,2,{3},{{4}}},{5,6,7,8}]
POSTED BY: Sander Huisman

And this will work with very complicated nested arrays of length ~80000 as well, quite fast:

tmp = Nest[{#, {#}} &, Range[5], 14];
Length[Flatten[tmp]]
ListMask[tmp, Range[Length[Flatten[tmp]]]];
POSTED BY: Sander Huisman
Posted 9 years ago

~80,000 maybe, but 10^6 with a relatively flat structure took me over a minute to run. Nonetheless, thank you. It was just an idea.

POSTED BY: Timur Gareev

If it is relatively flat, then other strategies might be a better option... I took the most general case...

POSTED BY: Sander Huisman

Integration of Intel DAAL: increasing performance of existing methods in Classify/Predict + adding the new ones (AdaBoost, BrownBoost, LogitBoost) + online learning + outlier detection + association rules.

POSTED BY: Alexey Golyshev
Posted 9 years ago

One that I've had to implement myself, and which I suspect could be a lot more efficient if it were built in, is PrimeRange, which returns all the primes between its lower and upper bound. Currently I have something built on Prime and PrimePi, which works, I guess....
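For reference, the Prime/PrimePi version is a one-liner (primeRange is a hypothetical name); a built-in could presumably use a segmented sieve and be much faster for large ranges:

primeRange[a_, b_] := Prime[Range[PrimePi[a - 1] + 1, PrimePi[b]]]

primeRange[10, 30]
(* {11, 13, 17, 19, 23, 29} *)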

POSTED BY: Matt Pillsbury

Regarding the frontend:

  1. A feature that is partially implemented already: when you type a closing bracket, both brackets are momentarily highlighted. It would be nice to have this extended: if your cursor/caret is next to a bracket, highlight the matching bracket, so that in an expression ending in ]}]] you can easily tell which one is which. Now included (V11 or V11.1?)

  2. In addition it would be nice that once you have your caret on a variable, that it will highlight (bold?) all the instances of this variable.

  3. It would be nice to have a right-click "View definition" if you click on any function, if it is built-in then use the built-in search, otherwise go to the function in the code.

POSTED BY: Sander Huisman

I would like a significant front-end improvement regarding (accidentally) showing big expressions. Currently it shows an intermediary dialog with a short/shallow form of the expression and 'show more' buttons, but generating this sometimes takes forever or crashes the front end. I would like (an option) to disable this behaviour. I'm often working with variables of a gigabyte or more (big matrices). One typo and the front end crashes, with no way around it. Just a message saying "Expression too big to view" as output is fine; no need for this dialog with options and truncated views.

POSTED BY: Sander Huisman

We already have MapThread and MapIndexed, but a combination MapThreadIndexed would also be handy on some occasions (though easily made myself using MapThread with an extra list of indices, which is presumably how MapIndexed works anyway).
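A sketch of that construction (mapThreadIndexed is a hypothetical name); note the index is appended as a plain integer, unlike MapIndexed's {i} lists:

mapThreadIndexed[f_, lists_] := MapThread[f, Append[lists, Range[Length[First[lists]]]]]

mapThreadIndexed[g, {{a, b, c}, {x, y, z}}]
(* {g[a, x, 1], g[b, y, 2], g[c, z, 3]} *)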

POSTED BY: Sander Huisman

Another addition would be support for custom PlotThemes. I know it is (kind of) possible now using the hidden function:

Themes`AddThemeRules["epic",
  DefaultPlotStyle -> Directive[Blue,Opacity[0.5]],
  Background -> LightBlue,
  AxesStyle -> Red
]

but does not work well when you combine multiple themes...

POSTED BY: Sander Huisman

I don't see any difference with Mathematica 10.3.1 from a default plot theme if I use that "epic" rule and evaluate, say:

 Plot[Exp[-x] Cos[x], {x, 0, 2 Pi}, PlotTheme -> "epic"]
POSTED BY: Murray Eisenberg

Hmm try this (clean fresh kernel):

Themes`AddThemeRules["epic", 
 DefaultPlotStyle -> Directive[Blue, Opacity[0.5]], 
 Background -> LightBlue, AxesStyle -> Red]
Plot[Exp[-x] Cos[x], {x, 0, 2 Pi}, PlotTheme -> "epic"]

You should see a very ugly plot!

for me $Version is "10.3.1 for Mac OS X x86 (64-bit) (December 9, 2015)"

POSTED BY: Sander Huisman

My fault: when I copied and pasted your code for the Themes`AddThemeRules expression, I somehow introduced an error.

The undocumented Themes context has some interesting items, e.g.:

 Themes`ThemeGallery[]

I hope Themes`AddThemeRules gets officially documented and kept in the language: it's a very handy way to consistently treat plots.

POSTED BY: Murray Eisenberg

When can I finally copy a DMSString like this: DMSString[$GeoLocation]:

"45\[Degree]45'0.000\"N 4\[Degree]50'24.000\"E"

without getting those annoying [Degree]s in there; I would like to see the ° symbol! Inside Mathematica this copy-paste makes sense, but outside it doesn't...

Also more customisability for DMSString when given a GeoLocation would be very welcome!

POSTED BY: Sander Huisman

I'd like to see a new function for numerical inverse Laplace transforms.

Gives a numerical approximation to the inverse Laplace transform of expr evaluated at the numerical value t, where expr is a function of s.

An Example:

f[s_] := 1/(s^2 + 1);
InverseLaplaceTransform[f[s], s, t] /. t -> 1
(*0.8414709848078965*)

 NInverseLaplaceTransform[f[s], s, 1]
 (*0.8414709848078965*)

The current InverseLaplaceTransform can't solve:

  f1[s_] := Exp[s]/s^3;
  f2[s_] := Exp[s]/s^2;
  f3[s_] := 1/Sin[s];
  f4[s_] := s/Gamma[s];

and the list goes on!

See web links: Link1 Link2 Link3

POSTED BY: Mariusz Iwaniuk

Isn't this just

InverseLaplaceTransform[f[s], s, 1.]

POSTED BY: Jon McLoone

InverseLaplaceTransform is a symbolic solver, so it works only in simple cases. Take, for example:

g[s_] := Sqrt[Log[s]/s];
InverseLaplaceTransform[g[s], s, 2.]
(*{InverseLaplaceTransform[Sqrt[Log[s]/s], s, 2.]}*)

It can't calculate the value at t = 2.

That is why this new feature is needed:

A numerical solver of InverseLaplaceTransform.

NILT = Compile[{{t, _Real}, {n, _Integer}, {e, _Real}, {a, _Real}},
  Module[{k},
   ((1/4) Exp[a t + e]/t) (((1/2) Re[f[s]] /. s -> a + e/t) +
     Sum[Re[Exp[(1/4) I k Pi] (f[s] /. s -> a + e/t + (1/4) I k Pi/t)], {k, 1, n}])]];
 t = 2;
 n = 50000;
 e = 2;
 a = 0;
 NILT[t, n, e, a]
 (*-1.05866*)

For g[s] = Sqrt[Log[s]/s] at t = 2, the value is -1.05866.

POSTED BY: Mariusz Iwaniuk

I would like Mathematica to include a primorial function, such as Primorial[n_] := Product[Prime[i], {i, 1, n}]. Primorials are very useful in finding twin primes, as twin primes are found at or close to primorials. For example, some primorials are {2, 6, 30, 2310, 30030}, and twin primes lie on either side of several of them: {{5, 7}, {29, 31}, {2309, 2311}}. Primorials are also of use in factoring.

only 79 percent? Actually, I think that's impressive.

POSTED BY: Frank Kampas

@Frank Kampas The number is impressive by itself, until you see that Maple solves 92%, and does it on average 25x faster!

POSTED BY: Sander Huisman

Sander, please post an example of a DE solved by Maple but not Mathematica.

POSTED BY: Frank Kampas

Have a look at his exhaustive list:

http://12000.org/mynotes/kamek/version1/KERNEL/KEse1.htm#x3-20000

there is a section at the bottom:

Solved by Mathematica but not by Maple

and vice versa...

POSTED BY: Sander Huisman

I looked at a couple examples of DEs solved by Maple and not Mathematica and the Maple "solutions" had integrals in them. Not what I regard as a solution.

POSTED BY: Frank Kampas

Though not a 'full' solution, I'd rather have an explicit solution for (say) y in terms of an integral than an unsolved ODE... Again, I'm not saying Mathematica is bad at solving ODEs, but I think there is still room for improvement.

I'm just curious, though, whether there are mistakes in the solutions of Maple or Mathematica... Not giving a solution is better than giving a wrong one... I think both companies should do a showdown ;-)

POSTED BY: Sander Huisman

It's probably possible to write a program that goes through a list of DEs that Mathematica claims to solve and check to see if the solutions are correct.

POSTED BY: Frank Kampas
DSolve[y'[x] - y[x]^2 - y[x]*Sin[x] - Cos[2*x] == 0, y[x], x]
DSolve[y'[x] + 2*Tan[y[x]]*Tan[x] - 1 == 0, y[x], x]
DSolve[y'[x] - Tan[x*y[x]] == 0, y[x], x]
DSolve[(y'[x])^2 - 2*x^2*y'[x] + 2*x*y[x] == 0, y[x], x]
example 414.
example 415.
DSolve[3*(y'[x])^2 + 4*x*y'[x] - y[x] + x^2 == 0, y[x], x]
DSolve[x*(y'[x])^2 + (y[x] - 3*x)*y'[x] - y[x] == 0, y[x], x]
example 428.
example 429.
example 430.
DSolve[(x^2 + a)*(y'[x])^2 + 2*y[x]*x*y'[x] + y[x]^2 + b == 0, y[x], x]
example 465.
.....
.....

the list goes on...

let's take the first equation:

DSolve[y'[x] - y[x]^2 - y[x]*Sin[x] - Cos[2*x] == 0, y[x], x]
(*Out: DSolve[y'[x] - y[x]^2 - y[x]*Sin[x] - Cos[2*x] == 0, y[x], x]*)


  • Mathematica = 0.
  • Maple = 1.
POSTED BY: Mariusz Iwaniuk

An upgrade to the DSolve function for solving ordinary differential equations analytically, based on the solutions presented in this book:

Handbook of Exact Solutions for Ordinary Differential Equations.Valentin F. Zaitsev, Andrei D. Polyanin.

This book contains nearly 6200 ordinary differential equations and their solutions.

Of these, 1940 ordinary differential equations and their solutions were tested; Mathematica can solve only 79 percent.

If DSolve can't solve an equation, the answer might look like this:

eq = y''[x] + f[x]*y[x] == 0;
DSolve[eq, y[x], x]
(*DSolve[y[x] = Exp[Integrate[Y[x], x] + C[1]] -> {Y'[x] == -Y[x]^2 + f[x]->Y[x] == y'[x]/y[x]}]]*)

eq2 = y''[x] + x*Exp[x]*y[x] == 0;
DSolve[eq2, y[x], x]
(*DSolve[y[x] = Exp[Integrate[Y[x], x] + C[1]] -> {Y'[x] == -Y[x]^2 + x*Exp[x]-> Y[x] == y'[x]/y[x]}]]*)
POSTED BY: Mariusz Iwaniuk

I'd like to see a new Heun function.

Heun equations include as particular cases the Lamé, Mathieu, spheroidal wave, and hypergeometric equations, and with them most of the known equations of mathematical physics. The five Heun functions are defined as the solutions to each of the five Heun equations, computed as power series solutions around the origin satisfying prescribed initial conditions.

POSTED BY: Mariusz Iwaniuk

Would be very useful indeed!

POSTED BY: Sander Huisman

12.1 introduced: HeunB, HeunBPrime, HeunC, HeunCPrime, HeunD, HeunDPrime, HeunG, HeunGPrime, HeunT, and HeunTPrime

POSTED BY: Sander Huisman

I'd like to see a new function to solve integro-differential equations.

With the method in this paper, one can solve:

  • differential equations
  • difference equations
  • differential-difference equations
  • fractional differential equations
  • pantograph equations
  • integro-differential equations

Examples only for integro-differential equations:

eq = y'[x] + 2*y[x] + 5*Integrate[y[t], {t, 0, x}] == Piecewise[{{0, x < 0}, {1, x >= 0}}];
IntDSolve[{eq, y[0] == 0}, y[x], x]
(*Out: {{y[x] -> 1/2*Exp[-x]*Sin[2*x]}}*)

eq1 = y[x] - 1/2*Integrate[y[t]*x*t, {t, 0, 1}] == 5/6*x;
IntDSolve[eq1, y[x], x]
(*Out: {{y[x] -> x}}*)

eq3 = {y1[x] == x^2 - 1/5*t^5 - 1/10*x^10 + Integrate[y1[t]^2 + y2[t]^3, {t, 0, x}], y2[x] == x^3 + Integrate[y1[t]^3 - y2[t]^2, {t, 0, x}]};
IntDSolve[eq3, {y1, y2}, x]
(*Out: {{y1[x] -> x^2}, {y2[x] -> x^3}}*)

 eq4 = {y1'[x] == 1 + x + x^2 - y2[x] - Integrate[y1[t] + y2[t], {t, 0, x}], y2'[x] == -1 - x + y1[x] - Integrate[y1[t] - y2[t], {t, 0, x}]};
 IntDSolve[{eq4, y1[0] == 1, y2[0] == -1}, {y1, y2}, x, Order -> 3]
 (*Out: {{y1[x] -> 1 + 2 x + x^2/2 + x^3/6 + O[x^4]}, {y2[x] -> -1 - x^2/2 - x^3/6 + O[x^4]}}*)

 eq5 = {f''[x] == 1 - x^3 - 1/2*g'[x]^2 + 1/2*Integrate[f[t]^2 + g[t]^2, {t, 0, x}], g''[x] == -1 + x^2 - x*f[x] + 1/4*Integrate[f[t]^2 - g[t]^2, {t, 0, x}]};
 IntDSolve[{eq5, f[0] == 1, f'[0] == 2, g[0] == -1, g'[0] == 0}, {f, g}, x, Order -> 3]
 (*Out: {{f[x] -> 1 + 2 x + x^2/2 + x^3/6 + O[x^4]}, {g[x] -> -1 - x^2/2 - x^3/6 + O[x^4]}}*)
POSTED BY: Mariusz Iwaniuk

An update to the Derivative and Integrate functions for fractional calculus.

Examples:

FractionalD[nu_, f_, t_, opts___] :=  Integrate[(t - x)^(-nu - 1) (f /. t -> x), {x, 0, t}, opts,GenerateConditions -> False]/Gamma[-nu]
FractionalD[mu_?Positive, f_, t_, opts___] :=  Module[{m = Ceiling[mu]}, D[FractionalD[-(m - mu), f, t, opts], {t, m}]]

Computing a fractional derivative:

f[x_] := a*x + b;
FractionalD[1/2, f[t], t] /. t -> x
(*(4 a Sqrt[x])/(3 Sqrt[\[Pi]]) + (3 b + 2 a x)/(3 Sqrt[\[Pi]] Sqrt[x])*)

Computing a fractional integral:

FractionalD[-1/2, f[t], t] /. t -> x
(*(2 Sqrt[x] (3 b + 2 a x))/(3 Sqrt[\[Pi]])*)
POSTED BY: Mariusz Iwaniuk

An update to the DSolve and NDSolve functions for solving higher-order and multidimensional partial differential equations, analytically or numerically.

Examples of higher-order partial differential equations to solve:

pde1 = D[u[x, t], {x, 3}] - u[x, t]*D[u[x, t], x] + D[u[x, t], t] == 0
(*Korteweg-de Vries equation*)
pde2 = D[u[x, t], {x, 3}] == D[u[x, t], t]
(*Dym equation*)
pde3 = a^2*D[u[x, t], {x, 4}] + D[u[x, t], {t, 2}] == 0
(*Equation of transverse vibration of elastic rod*)
pde4 = D[u[x, y], {x, 4}] + 2*D[u[x, y], {x, 2}, {y, 2}] +  D[u[x, y], {y, 4}] == 0
(*Biharmonic equation*)

Examples of multidimensional partial differential equations to solve:

multipde1 = D[u[x, y, z, t], t] - D[u[x, y, z, t], {x, 2}] - D[u[x, y, z, t], {y, 2}] - D[u[x, y, z, t], {z, 2}] == 0
(*Multidimensional Heat equation*)
multipde2 = D[u[x, y, z, t], {t, 2}] - D[u[x, y, z, t], {x, 2}] - D[u[x, y, z, t], {y, 2}] - D[u[x, y, z, t], {z, 2}] == 0
(*Multidimensional wave equation*)
multipde3 =  D[u[x, y, z, t], {x, 4}] + D[u[x, y, z, t], {y, 4}] + D[u[x, y, z, t], {z, 4}] + D[u[x, y, z, t], {t, 2}] == 0
(*Multidimensional transverse vibration of elastic rod equation*)
POSTED BY: Mariusz Iwaniuk

I would like to see an option Series for DSolve.

Examples:

eq = {(x^2 + 1)*y''[x] - 4*x*y'[x] + 6*y[x] == 0};
DSolve[eq, y[x], x, Method -> "Series", Point -> (x = a), Order -> 3]
(*Out: {{y[x] -> y[a] + y'[a] (x - a) + (2*a*y'[a] - 3*y[a])/(a^2 + 1)*(x - a)^2 + O[x-a]^4}}*)


ibc = {y[0] == 1, y'[0] == 1};
DSolve[{eq, ibc}, y[x], x, Method -> "Series", Order -> 3]

(*Out: {{y[x] -> 1 + x - 3 x^2 - 1/3*x^3 + O[x^4]}}*)


pde = D[u[x, t], {x, 2}] == Exp[Sin[x]]*D[u[x, t], t];
DSolve[pde, u[x, t], {x, t}, Method -> "Series",Point -> {x = a, t = b}, Order -> 2]
(*Out:{{u[x, t] -> 1/2*((x - a)^2*Exp[Sin[a]] + 2*t - 2*b)*Derivative[0, 2][u][a, b] + Derivative[1, 2][u][a, b] (x - a) (t - b) + 
    1/2*Derivative[2, 2][u][a, b] (t - b)^2 + 1/2*(2 x - 2 a)*Derivative[1, 0][u][a, b] + u[a, b]}} *)

ibc = {u[x, 0] == f[x], Derivative[0, 2][u][x, 0] == g[x]};
DSolve[{pde, ibc}, u[x, t], {x, t}, Method -> "Series", Order -> 2]
(*Out: {{u[x, t] -> f[0] + g[0] + f'[0]*x + g'[0]*x*t + 1/2*g[0]*x^2}}*)

ibc2 = {u[x, 0] == x, Derivative[0, 2][u][x, 0] == Sin[x]};
DSolve[{pde, ibc2}, u[x, t], {x, t}, Method -> "Series", Order -> 2]
(*Out: {{u[x, t] -> 1/2*Sin[u]*Exp[Sin[u]]*(-x + u)^2 + Sin[u]*t - t (-x + u)*Cos[u] + u}}*)
POSTED BY: Mariusz Iwaniuk

It would be nice to have a function that solves the Vehicle routing problem.

POSTED BY: Alexey Golyshev
Posted 9 years ago

I would like to see a method for the Feynman-Kac Theorem. It solves a very important class of parabolic PDEs.

Link to Wiki

POSTED BY: Edvin Beqari

An extension of PalindromeQ:

PalindromeQ[n,b]

which would check whether n is a palindrome in its base-b representation.

17 is not a palindrome in base 10, but it is in base 2: 10001
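This extension is a one-liner via IntegerDigits (a sketch with a hypothetical lowercase name):

palindromeQ[n_Integer, b_Integer] := With[{d = IntegerDigits[n, b]}, d === Reverse[d]]

palindromeQ[17, 2]  (* True: 10001 *)
palindromeQ[17, 10] (* False *)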

POSTED BY: Sander Huisman

Binning data with associated data

A common 'problem' is that you want to bin data by (e.g.) the x coordinate, and then you want to have the associated y with it. So to do this, I often do:

x={24,19,49,5,27,100,18,28,77,38,82,22,2,13,12,32,69,72,52,90,16,9,63,64,10,31,51,14,80,70,21,30,71,79,37,65,84,47,33,81,40,94,68,58,11,15,97,88,1,99,74,78,91,93,89,26,45,98,95,67,4,92,29,43,85,39,73,23,8,62,83,57,35,41,17,34,75,25,66,53,44,36,50,60,3,46,86,42,20,56,6,87,55,76,54,48,7,96,59,61};
y={6,64,21,13,34,100,7,89,83,50,19,32,43,38,60,14,1,31,99,40,80,78,68,95,55,72,63,65,91,71,9,51,70,97,37,25,20,52,88,22,62,81,66,69,35,75,29,4,26,27,41,33,93,18,42,98,77,44,85,17,11,12,57,94,61,54,23,28,30,67,10,3,46,45,87,79,96,16,24,73,58,53,48,36,90,76,92,2,82,39,5,59,49,15,74,8,47,84,56,86};
ybinnedbyx=BinLists[{x,y}\[Transpose],{0,100,10},{-10000,10000,20000}][[All,1,All,2]]

i.e. I turn it into 2D data and then make a very big bin size in the y direction, so that all the data falls into one bin. Of course this works with numerical data, but it doesn't work if the data in the y direction are strings, lists, or something else...

I see four possible solutions (the 3rd being the most neat):

1

A new binning function that returns lists of indices (BinIndices or BinPositions are good candidate names), so that it can be used with Extract on other data. Still a little fiddly, because it (presumably) returns a list of lists of indices, and Extract does not handle that directly, so you probably have to use some combination of Map, Part, and a pure function.

2

A new option to BinLists, for which I have no good name yet but let's call it BinFunctions for now. By default, if I give some 2D data:

{{1,2},{2,4},{3,1.5},{4,8}....}

it will bin it first by Part[#,1]& of the expression and then by Part[#,2]&. That is, first by the first element, then by the second element... If we could supply our own BinFunctions, then we could do something like Part[#,1]& and 1&, such that all the y data goes into one bin.

3

Reduce the necessity to supply n number of binspecs when we have "vectors" of length n. So:

BinLists[{x,y}\[Transpose],{0,100,10}] 

would bin only by the first element of each vector, ignoring the rest of the vector... This would keep it backward compatible, so that is very good!

4

A new option for BinLists called something like AssociatedData (and a combiner function?). It would first bin the data, then combine the results with the associated data using the combiner function (List by default).
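Until something like this exists, option 3's behaviour (bin by x, keep the associated y, whatever its type) can be sketched with GroupBy (binListsBy is a hypothetical name):

binListsBy[xs_, ys_, {min_, max_, d_}] := Module[
  {g = GroupBy[Transpose[{xs, ys}], Floor[(First[#] - min)/d] &, #[[All, 2]] &]},
  Lookup[g, Range[0, Round[(max - min)/d] - 1], {}]]

binListsBy[{1, 12, 15, 27}, {"a", "b", "c", "d"}, {0, 30, 10}]
(* {{"a"}, {"b", "c"}, {"d"}} *)

Since the y values are only carried along, they can be strings, lists, or anything else.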

POSTED BY: Sander Huisman

I agree with you; it would be very useful. Recently I had a similar problem: I needed to bin a very big list by {X, Y} and needed information about Z in every bin. I solved this problem using associations ({X,Y} as keys, Z as values, and Lookup of the bins from BinLists).

POSTED BY: Alexey Golyshev

Or perhaps, like many other functions in the Wolfram Language, BinListsBy as a name!

POSTED BY: Sander Huisman

I would like to be able to add a LocalObject and LocalSymbol to a Mathematica file, including open ones (even the one in which the command is run). I think this could be a great way of extending these functions, so that we could persistently add plain data to a notebook (without going through unnatural DynamicModule/SaveDefinitions techniques), and it would be halfway to saving sessions.

POSTED BY: Pedro Fonseca

How about touch-screen operations for notebooks?

POSTED BY: Eric Johnstone

Like what? A nice easy-to-access execute button for each cell? Please elaborate...

To be honest, I don't think Mathematica should be used on a touch display; the input is just way slower than by keyboard. Speech/voice I can't see working either.

POSTED BY: Sander Huisman

How about full touch-screen functionality for notebooks?

My new notebook computer has both touch screen and keyboard. I also have a wrist mouse problem, so I'm using the touch screen whenever possible. Normal touch screen functions like scroll and expand/contract could cut down on mousing considerably.

POSTED BY: Eric Johnstone

Your OS doesn't provide a way to scroll with multiple fingers or so? Perhaps some gestures?

POSTED BY: Sander Huisman

I would like a DecimalForm for output. It would recognize numeric subexpressions, and format them as decimal numbers, with a trailing ... on exact numbers. E.G.

DecimalForm[1/3]
(* 0.333333... *)

DecimalForm[x + Sin[2]/Sqrt[3]]
(* 0.524983... + x *)

The function I have in mind would recognize when a numeric expression represents a real algebraic and avoid parasitic imaginary parts.
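A rough user-level approximation of the idea (ignoring the real-algebraic refinement; the 6-digit display and the lowercase name are just placeholders):

```wolfram
(* Sketch: replace exact non-integer numeric subexpressions by a truncated
   decimal display with a trailing ellipsis. *)
decimalForm[expr_] :=
 expr /. r_?NumericQ /; Precision[r] === Infinity && ! IntegerQ[r] :>
   Row[{NumberForm[N[r], 6], "..."}]

decimalForm[1/3]                 (* displays as 0.333333... *)
decimalForm[x + Sin[2]/Sqrt[3]]  (* displays with 0.524983... in place of Sin[2]/Sqrt[3] *)
```

Since /. replaces the largest matching subexpression first, Sin[2]/Sqrt[3] is formatted as a whole rather than piecewise.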

POSTED BY: John Doty

But what about x/3? Should that be displayed as x*0.333...? Or in the middle of an equation: (x + 1/3)/(x + Pi) would then transform to (x + 0.3333...)/(x + 3.14159265...); to me, that is harder to read...?

POSTED BY: Sander Huisman

It's an output form you would choose for certain kinds of expressions. I have monsters containing lots of Root constants in mind. If not appropriate for your expression, don't choose it.

POSTED BY: John Doty

I'd like more freedom in the arguments of EdgeForm. For instance, graph-related functions have the option EdgeRenderingFunction, which allows the user to draw edges in Graph3D as Tube objects (just as an example). It would be great if there were an easy way to draw the edges of objects in a Graphics3D as tubes as well. I realize that this can often be done either by accessing the underlying data and constructing the tubes manually, or by replacing all cases of Line in the output. However, constructs for polyhedra like Cuboid, Tetrahedron, etc., are really handy (I don't want to rewrite them), and the underlying Line objects cannot easily be accessed (I'm not sure they exist at all). Even a hardcoded EdgeForm["Tube",radius,color] would be helpful, but something like a completely free EdgeFormFunction->(something) (as an option of all Graphics3D primitives that support EdgeForm) would be even more awesome.

It's possible that a similar argument could be made for FaceForm, but I don't have an example case in mind right now.

By the way, I really like this thread! Thank you for considering our suggestions.

POSTED BY: Bianca Eifert

That would be a nice addition indeed!

POSTED BY: Sander Huisman
Posted 10 years ago

I would like better support (well, some support, really) for SparseArrays and Strings in Compile; being able to compile recursive functions would be a nice bonus as well.

POSTED BY: Matt Pillsbury

I'm pretty sure recursive functions can be handled. Probably needs the three argument form of Compile that in effect declares the return type.

POSTED BY: Daniel Lichtblau

I would like to have separate functions for testing whether a graph has edge weights or vertex weights. WeightedGraphQ returns True for both. Working with properties (PropertyList, PropertyValue, etc.) is at the moment unreliable, so I am not sure of a robust way to test for this using properties that won't fail in some situation.

POSTED BY: Szabolcs Horvát
Posted 10 years ago

Why is PropertyValue unreliable? Please show an example; it's interesting.

EdgeWeightQ[g_]:=!SameQ[PropertyValue[g,EdgeWeight],Automatic]
VertexWeightQ[g_]:=!SameQ[PropertyValue[g,VertexWeight],Automatic]

g=Graph[{1<->2,2<->3,3<->1},EdgeWeight->RandomInteger[5,3]];
{WeightedGraphQ[g],EdgeWeightQ[g],VertexWeightQ[g]}
{True,True,False}

g=Graph[{1<->2,2<->3,3<->1},VertexWeight->RandomInteger[5,3]];
{WeightedGraphQ[g],EdgeWeightQ[g],VertexWeightQ[g]}
{True,False,True}
POSTED BY: Alexey Golyshev

One example: try your function on Graph[{}, {}].

Yes, this is what I am using right now: WeightedGraphQ[g] && PropertyValue[g, EdgeWeight] =!= Automatic. But given the myriads of problems with properties I have no confidence whatsoever that this won't break in some situation. Also note that it is not documented what PropertyValue[g, EdgeWeight] should return for an unweighted graph, so how do I know that this implementation will always work?

There are countless weird bugs with properties, I can't even keep track of them. To tease them out, try the following things: set various properties such as EdgeWeight, EdgeCapacity, VertexWeight, VertexCoordinates on a graph, then set it again on the result. This fails the second time in some cases unless using very specific syntax: SetProperty[g, VertexCoordinates -> { vertexName -> coord}] instead of SetProperty[g, VertexCoordinates -> {coord}]. Then try setting properties with the alternative Graph[g, EdgeWeights -> ...] syntax, which sometimes fails even though SetProperty works.

SetProperty[KaryTree[63], EdgeWeight -> RandomReal[1, 62]] works. Graph[KaryTree[63], EdgeWeight -> RandomReal[1, 62]] fails. Graph[RandomGraph[{20, 62}], EdgeWeight -> RandomReal[1, 62]] works. Graph[Uncompress@Compress@KaryTree[63], EdgeWeight -> RandomReal[1, 62]] works again

Then try setting properties, then deleting or adding edges or vertices. Try different combinations such as adding vertices with EdgeAdd instead of VertexAdd, i.e. EdgeAdd[Graph[{1,2}, {1<->2}], {2<->3}]. Try running various functions on the result. Even when the property setting appears to work, in past versions it has happened that the graph got corrupted internally in some subtle way so certain things didn't work on it anymore (or even crashed the kernel). I don't know which of these are fixed (one crash I knew about was fixed).

In general I have little faith left in the property framework, and currently I can find no clear way, purely based on documented features, to test whether a graph has edge weights. I guess a promise (in the documentation) that PropertyList[g] will never contain EdgeWeight for non-edge-weighted graphs and will always contain it for edge-weighted ones would be sufficient. I don't actually need separate functions.

POSTED BY: Szabolcs Horvát
Posted 10 years ago

Hah! EdgeWeightQ and VertexWeightQ are True for Graph[{}, {}].

There really are a lot of weird bugs. It's interesting to know. Thank you!

POSTED BY: Alexey Golyshev
Posted 10 years ago

It would be nice to see implementation of Hidden semi-Markov model: functions HiddenSemiMarkovProcess and FindHiddenSemiMarkovStates.

POSTED BY: Alexey Golyshev

I would like to see the following improvements to DendrogramPlot (from the HierarchicalClustering standard add-on):

  • ability to highlight the lines of the dendrogram plot instead of the background (see HighlightLevel and HighlightStyle)
  • ability to use different colours for each cluster, i.e. here the two separate green parts should have different colours
  • ability to also highlight the leaf labels according to which cluster they are in (HighlightLevel), without needing to do additional manual work

In general, it would be nice if this function were on par with MATLAB's version. MATLAB also supports optimizing the leaf order.

POSTED BY: Szabolcs Horvát
Posted 10 years ago

A neat way to extract properties from sound objects, such as sample rate, list of samples, etc.

POSTED BY: Rui Rojo

This can now be done using the Audio framework:

AudioSampleRate
AudioData
POSTED BY: Sander Huisman
Posted 8 years ago

Yeah. I was happy with this.

POSTED BY: Rui Rojo
Posted 10 years ago

A function to convert quantities to prefixed units such that the magnitude is between 1 and 1000. So, 0.023V would be 23mV, 2.7*^4 ohm would be 27 kiloohm, etc.
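A user-level sketch of this idea (prefix table abbreviated; the function name and the triple it returns are just an illustration, assuming a nonzero magnitude):

```wolfram
(* Sketch: re-express an SI quantity so its magnitude falls in [1, 1000),
   returning {magnitude, prefix, unit}. *)
prefixUnit[q_Quantity] := Module[{base, mag, exp3, prefixes},
  prefixes = <|-9 -> "nano", -6 -> "micro", -3 -> "milli", 0 -> "",
               3 -> "kilo", 6 -> "mega", 9 -> "giga"|>;
  base = UnitConvert[q];                  (* convert to SI units *)
  mag = QuantityMagnitude[base];
  exp3 = Clip[3 Floor[Log10[Abs[mag]]/3], {-9, 9}];  (* nearest lower multiple of 3 *)
  {mag/10^exp3, prefixes[exp3], QuantityUnit[base]}
]

prefixUnit[Quantity[0.023, "Volts"]]   (* {23., "milli", "Volts"} *)
```

A built-in version would presumably reassemble a proper Quantity with the prefixed unit rather than returning a triple.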

POSTED BY: Rui Rojo

I would like to see a few specific improvements to graph drawing / graph layouts.

First I have to say that Mathematica's automatic graph layout is very good compared to the competition. It usually manages to automatically select a layout algorithm that produces a nice result, it is fast, and it adds polishing touches such as rotating the graph to fit in the notebook well, and it has lots and lots of different methods to use. But it also has limitations.

So I would like to have:

  • Better documentation!! The General Graph Drawing tutorial (from v5.x!) should be updated, the edge layout methods (completely undocumented now) should be documented, etc.

  • Many graph layout algorithms sequentially update vertex positions. There should be a way to provide starting positions. This is so that incremental changes to graphs can be visualized. E.g. if I add just one edge, there should be a way to avoid significantly changing the positions of non-affected vertices. Example:

enter image description here

(EDIT: Now I realize that in the example above 9 and 10 were exchanged too. This feature request encompasses all settings that make it possible to visualize incremental changes, including disabling vertex permutations if any algorithms do that ...)

  • Graph layout and visualization should be fully decoupled from other functionality and from graph handling. Functions should not waste time on computing layouts when I don't visualize the graph (NeighborhoodGraph being one notorious example which runs extremely slowly because of this!!). Also, graph visualization should not happen automatically for large graphs (and potentially lock up Mathematica). I should not have to take extra care to always add a semicolon and avoid showing the graph when I'm working with large networks. For large networks, visualization should only happen on demand, not automatically.
POSTED BY: Szabolcs Horvát

I have similar feelings; I'm always struggling to get the vertices to be where I want them to be. For example I have a graph with all my flights:

enter image description here

Now it would be great if I could move CDG between TLS and ZAG! I'm quite sure this is now not possible; I'd like a way to 'tweak' the layout once the algorithms have done their work...

POSTED BY: Sander Huisman

I would like SemanticImport to support data tables with both column and row labels. A CSV could look like

,A,B
C,1,2
D,3,4

where A and B are column labels and C and D are row labels. It already supports data with only column labels quite well. If this feature is added, it would be useful to be able to trigger it manually (instead of fully relying on automatic detection).

I know that this simple example I showed could be read as SemanticImportString[..., {"String", "Number", "Number"}, HeaderLines -> 1] (with a bit of post-processing), but this sort of data tends to have a large number of columns instead of just a few. Also the number of columns might not be known ahead of time (unless I count manually). I'm hoping for something that can easily handle a 500 by 500 table with row and column labels.

In some fields it is common to have data in this format for (dense, weighted) adjacency matrices, and the vertex labels are very important. While it would be better to distribute this sort of data in a proper graph-oriented format, that is not what is happening in practice and that's not what I need to deal with.
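As a stopgap, plain Import plus a little post-processing can build the labelled structure; a sketch (the function name is mine, and the nested-Association result is just one possible target shape):

```wolfram
(* Sketch: read a CSV with both row and column labels into an
   Association of row-label -> <|column-label -> value|>. *)
importLabelled[file_] := Module[{raw, colLabels, rowLabels, body},
  raw = Import[file, "CSV"];
  colLabels = Rest@First@raw;        (* first row, minus the corner cell *)
  rowLabels = First /@ Rest@raw;     (* first column *)
  body = Rest /@ Rest@raw;
  AssociationThread[rowLabels ->
    (AssociationThread[colLabels -> #] & /@ body)]
]
```

Wrapping the result in Dataset then gives a labelled table, but of course this has none of the type handling that SemanticImport provides.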

POSTED BY: Szabolcs Horvát

"HeaderColumns" -> n would be useful indeed!

POSTED BY: Sander Huisman
Posted 10 years ago

I would like to see a method "ConvolutionalNeuralNetwork" in Classify. I know that ImageIdentify is based on a CNN, but I would like to train a CNN on my own images.

Also I would like to see improvements of documentation for Predict and Classify - there are a lot of undocumented functionality. For example:

{"NeuralNetwork", "HiddenLayers"-> {{4, "RectifiedLinear"}, {3, "Tanh"}, 3}} and all supported layers are {"LogisticSigmoid", "RectifiedLinear", "Tanh", "SoftRectifiedLinear", "Linear"}

Currently I have to search for such information using Options or on StackExchange.

POSTED BY: Alexey Golyshev
Posted 10 years ago

A List needs 24 bytes per element to store values:

ByteCount@{}
40
ByteCount@{1}
64

I would like to see IntegerArray, RealArray, ComplexArray in analog of existing ByteArray and undocumented RawArray.

But also I would like to see overloading of all operators (+, -, *, /) and support in all functions (Min, Max etc.) Examples:

ByteArray[{1,2}] + ByteArray[{3,4}] = ByteArray[{4,6}]

ByteArray[{1,2}] + 1 = ByteArray[{2,3}]

ByteArray[{1,2}] + 1.0 = RealArray[{2.0,3.0}]
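Today the desired overloads can only be imitated at user level by round-tripping through ordinary lists (names are mine; the point of a built-in version would be to avoid exactly this unpacking):

```wolfram
(* Sketch: elementwise ByteArray addition via Normal conversion. *)
byteArrayPlus[a_ByteArray, b_ByteArray] := ByteArray[Normal[a] + Normal[b]]
byteArrayPlus[a_ByteArray, n_Integer]   := ByteArray[Normal[a] + n]

byteArrayPlus[ByteArray[{1, 2}], ByteArray[{3, 4}]]
(* a ByteArray containing {4, 6} *)
```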
POSTED BY: Alexey Golyshev

If you are looking for efficient storage and better performance, then this is already available as packed arrays:

Packed arrays work transparently with many different functions and are meant for numerical computation.

POSTED BY: Szabolcs Horvát
Posted 10 years ago

Thank you. I know about PackedArrays but my idea is little bit different.

ByteCount@ToPackedArray[{}]
40

ByteCount@ToPackedArray[{1}]
112

RawArray supports array types {SignedInteger8, Byte, SignedInteger16, Bit16, SignedInteger32, UnsignedInteger32, SignedInteger64, Real32, Real, Complex128}

ByteArray is equivalent to RawArray["Byte",data]. The overhead is different, but the size of one data element is 1 byte. I would like to see similar functionality for integers and reals, and operator overloading as for packed arrays.

ToPackedArray[{1, 2, 3}] + 1
{2, 3, 4}

Not this:

ByteArray[{1, 2, 3}] + 1
1 + ByteArray[< 3 >]
POSTED BY: Alexey Golyshev

Packed arrays do store data efficiently: 8 bytes for an Integer (4 bytes on 32 bit systems) and 8 bytes for a Real. What you are seeing when you measure an empty array is a small constant overhead and possibly some inaccuracies in how ByteCount reports (?). Try measuring a much larger array to see this.

POSTED BY: Szabolcs Horvát
Posted 10 years ago

You are right, packed arrays do store data efficiently. WRI integrates multiple functions from the packages into the kernel. ByteArray was introduced in 10.1 but it doesn't support addition, multiplication as the packed arrays. My idea is to make operators overloading and integrate RealArray and IntegerArray into the kernel. Now of course I can use packed arrays from the "Developer`" package.

POSTED BY: Alexey Golyshev

Sorry, I still don't understand what functionality you are expecting from such a "RealArray" that packed arrays don't already do. Packed arrays are not provided by any package, they are integrated into the kernel. Developer` is just a context containing some functions, not a package. They couldn't possibly be provided by a package and still have such a deep level of integration: packed arrays are the basic data structure for compiled functions and LibraryLink too; automatic and transparent packed array handling is the reason why Mathematica can be fast with numerical computations on large arrays.

Adding arithmetic for ByteArrays is of course something different and unrelated. That's something Mathematica doesn't currently have.

POSTED BY: Szabolcs Horvát
Posted 10 years ago

When I was writing my post, I thought about RawArrays and different types and sizes of data for using the memory even more efficiently than with the packed arrays: SignedInteger8, SignedInteger16 etc. But you are right, packed arrays are sufficiently effective.

WRI can register only the one wish: "Adding arithmetic for ByteArrays"

P.S.: sorry for my English

POSTED BY: Alexey Golyshev
  • Operator form of GatherBy
  • Operator form of Nest
  • Operator form of Fold (now available in Version 11)
POSTED BY: Sander Huisman
Posted 10 years ago

Operator form of StringMatchQ and the like would be great, too.

POSTED BY: Matt Pillsbury

This has been added in 10.4 (or perhaps a version before that?)

POSTED BY: Sander Huisman
Posted 10 years ago

I would like to see the following functionality to improve Mathematica's capabilities for data science and numerics

1) Algorithmic Differentiation (also known as Automatic Differentiation) functionality.

2) More functions that are compilable. In particular, functionality that has no symbolic use, such as generating random deviates from distributions, could be made compilable so that simulation-based inference can be fast. In fact, Mathematica was way ahead of other languages (Julia, Python, etc.) in terms of JIT compilation and compilation to C, but somehow has not been able to fully leverage this advantage.

POSTED BY: Asim Ansari

I'd like to see Random Correlation and Covariance matrices. With RLink, the need for this functionality is somewhat reduced, since R has its "clusterGeneration" package that does this, but, still, it would be very useful to have this as built in Mathematica functionality. One could combine the results from this function with things such as MultinormalDistribution to produce populations with correlated features (just as in real life).

Possible usage

 Options[RandomCorrelationMatrix] = {Method -> "Onion",
   CorrelationDistribution -> TransformedDistribution[Rescale[x, {0, 1}, {-1, 1}],
     Distributed[x, BetaDistribution[1, 1]]]};
 RandomCorrelationMatrix[d_Integer, OptionsPattern[]]
POSTED BY: Seth Chandler

The command completion has a preference setting: "Match case in command completion". I switched this one off.

But it does autocomplete (show suggestions) when I type Ran, and not when I type ran. Can someone confirm? Is this as intended? Or is this only for options? I'd like to file a feature request for case-insensitive command completion as well...

This setting seems to be gone now.

POSTED BY: Sander Huisman

I would like to see more consistent colour schemes between different plotting functions. The default is yellow, blue, green for some, but it's blue, yellow, green for others.

Now suppose I'm in a discussion with someone and I need to show them some ideas quickly. I make a few plots, always using the data variables in the same order, but they get confused and annoyed:

So is the "foo" yellow and the "bar" blue or is it the reverse? But in the other figure it was the opposite! Why can't you just keep it consistent?

enter image description here

POSTED BY: Szabolcs Horvát

Wow! I never noticed!! crazy indeed! But they appear to be based around:

ColorData[97] /@ Range[3]

which Plot uses, but the others swap them somehow?! I would almost call it a bug...

POSTED BY: Sander Huisman

I have typed Range[Length[x]] a zillion times in my Mathematica life. It would be nice to have a function RangeLength[x] that did the same thing. I suppose it could be named IntegerCorrespondence[x,seq_:{1,1}] so that if we wrote

 IntegerCorrespondence[{"apples","bananas","fruits"},{3,2}]

we would get

{3,5,7}

or if we wrote

 IntegerCorrespondence[{"apples","bananas","fruits"}]

we would get

{1,2,3}
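A minimal sketch of the proposed function (lowercase name so as not to clash with anything built in):

```wolfram
(* Sketch: integers corresponding to the positions of a list,
   with an optional start and step. *)
integerCorrespondence[list_List, {start_ : 1, step_ : 1}] :=
  Range[start, start + step (Length[list] - 1), step]
integerCorrespondence[list_List] := integerCorrespondence[list, {1, 1}]

integerCorrespondence[{"apples", "bananas", "fruits"}, {3, 2}]  (* {3, 5, 7} *)
integerCorrespondence[{"apples", "bananas", "fruits"}]          (* {1, 2, 3} *)
```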

POSTED BY: Seth Chandler

Isn't it just as easy, or even shorter, to type Range@Length[...]? Or if there's a need to refer to the function without calling it then Range@*Length. The same goes for Reverse@Sort.

POSTED BY: Szabolcs Horvát

@Szabolcs, that is a good suggestion. I always do this, and also add Ctrl+Q to quit the current kernel.

POSTED BY: Rodrigo Murta

I would like to see builtin keyboard shortcuts for entering \[LeftDoubleBracket] and \[RightDoubleBracket].

The first thing I do after installing Mathematica is adding these shortcuts to KeyEventTranslations.tr. This is of course unsupported, and unfortunately in 10.2 the same shortcuts I used to use are now assigned to something else. These characters improve the readability of code significantly and having a shortcut for them would be a great improvement. Esc [[ Esc is just unrealistically slow to type.

POSTED BY: Szabolcs Horvát

I would like Mathematica to convert [[ to \[LeftDoubleBracket] while you type, just as it converts -> to \[Rule]. No need for special shortcut keys! A checkbox in the preferences would be nice.

POSTED BY: Sander Huisman

That might be a problematic thing to implement in a non-annoying way because the system can't tell for sure what you mean when you are typing the closing ]]. Is it ] ] or a double bracket? It would have to match the opening one, of course, but during editing things don't always match perfectly.

POSTED BY: Szabolcs Horvát

Indeed some checking is necessary. But I think this mechanism is already in place, also for the -> combination: it only appears after you type another character after it...

POSTED BY: Sander Huisman

What they could do is: if you type a ] character, check whether there is a [[ ]] pair (where the last bracket of these four was the last character typed) and then replace both at the same time...

POSTED BY: Sander Huisman
Integrate[f,x,y];
Integrate[f,x,y,z];

for Integrate[Integrate[f,x],y] ... This usage already works but is missing from the documentation. When Integrate is used as an indefinite integral it is, of course, an inverse operation to the differentiation D. Just as D[f,x,y] makes sense (and is documented), Integrate[f,x,y] makes sense too and should be documented.

For all functions f which allow to find Integrate[f,x,y] one has an exact formula for

Integrate[f, {x, y} \[Element] Rectangle[{x0, y0}, {x1, y1}]]

(not clear why Esc el Esc does not work for the set element symbol). So it should not happen, as it does for

Integrate[1/Sqrt[r*r + x*x + y*y], x, y]
-x + r ArcTan[x/r] - r ArcTan[(x y)/(r Sqrt[r^2 + x^2 + y^2])] + 
 y Log[x + Sqrt[r^2 + x^2 + y^2]] + x Log[y + Sqrt[r^2 + x^2 + y^2]]
rec = Rectangle[{-1, -1}, {1, 1}]
  Integrate[1/Sqrt[x*x + y*y + z*z], {x, y} \[Element] rec]

(* the definite integral over the rectangle is returned unevaluated *)

that the two-fold indefinite integral can be done and the definite integral over a rectangle remains unevaluated. Here the case of two variables was taken as a pattern for any number of variables.

POSTED BY: Ulrich Mutze

Mr. Mutze, I am not sure I understand: doesn't this already work?

Integrate[4 x y, x, y]

x^2 y^2

Also for your tougher example:

rec = Rectangle[{-1, -1}, {1, 1}];
FullSimplify[PowerExpand[Integrate[1/Sqrt[r + x^2 + y^2], 
{x, y} \[Element] rec, GenerateConditions -> False]]]

enter image description here

POSTED BY: Sam Carrettie

Dear Sam, thank you for your solution. I have learned this lesson: whenever a Mathematica function does not work as expected, look for the available options; they may make the thing work. In the case of my example it was the GenerateConditions -> False option. Perhaps the documentation should indicate that this option may make the difference between failure and success. That the many-arguments syntax really works for Integrate was already stated in my request; my point about the missing documentation for this variant still seems valid.

POSTED BY: Ulrich Mutze
Export[blahblah, "Markdown"]
Import[blahblah,"Markdown"]

Markdown is being used A LOT these days in languages such as R. There are many things one can do with Markdown text that are difficult in the Mathematica front end and, it would be useful to be able to work more seamlessly between Mathematica and things like Project Jupyter. By way of example

Export[ Cell[TextData[{ "This is a ", StyleBox["very ", FontSlant->"Italic"], StyleBox["interesting ", FontWeight->"Bold"], "idea. " }], "Text", CellChangeTimes->{{3.651086579454398^9, 3.651086616516877^9}}],"Markdown"]

might yield something like

This is a *very* **interesting** idea.

And something like this

Export[ Cell[BoxData[ RowBox[{"Solve", "[", RowBox[{ RowBox[{ RowBox[{ RowBox[{"a", " ", SuperscriptBox["x", "2"]}], "+", " ", "bx", " ", "+", "c"}], "==", "0"}], ",", "x"}], "]"}]], "Input", CellChangeTimes->{{3.651086627422246^9, 3.651086642785838^9}}],"Markdown"]

might yield something like

\text{Solve}\left[a x^2+\text{bx}+c=0,x\right]

With the answer looking like this

$\left\{\left\{x\to -\frac{\sqrt{-\text{bx}-c}}{\sqrt{a}}\right\},\left\{x\to \frac{\sqrt{-\text{bx}-c}}{\sqrt{a}}\right\}\right\}$

(By the way, in writing this comment up, I realized that the editor actually has the ability to translate out of LaTeX and Markdown. Cool.)

Anyway, this idea is rather inchoate, but Mathematica and its Front End ought to play better with Markdown.

POSTED BY: Seth Chandler
NashEquilibrium
NashEquilibriumStrategies
NashEquilibriumValues

These functions would take a strategic (normal) form game and yield an Association holding the equilibrium strategies (both pure and mixed) and the values to the players from playing those strategies. It would work on symbolic games (i.e., where the payoffs are symbols rather than numbers) by returning a Piecewise function (or something similar) showing the logical conditions under which each of the strategies would be a Nash equilibrium.

The function would work for n-player games.

This function would need some specification of a data structure to represent a game. Might be done via a mapping of strategy combinations onto values, some sort of tabular structure, or in some other way. It would also have a nice printout of the game in bimatrix form (for two player games) or in a list of matrices for n-player games.

I recognize this is a non-trivial undertaking. I am basically asking for a development project as much as a function, but I still think it would be a good idea. (There's a now-very old package out of New Zealand that has some of this functionality, but it needs a lot more development.)

POSTED BY: Seth Chandler
ReverseSort
ReverseSortBy

Now, obviously one can code this oneself either through wrapping the Sort in Reverse or by using a different second argument to Sort, but I think the ability to do this directly would be a useful simplification and be more expressive.

POSTED BY: Seth Chandler

ReverseSort is now (V11.1) added to the language. Note, however, that it is not the same as Reverse[Sort[...]], as that would reverse the order of ties; ReverseSort keeps ties in the original order.

https://reference.wolfram.com/language/ref/ReverseSort.html

POSTED BY: Sander Huisman

A @= function like the += -= *= counterparts.

b@= f would be equivalent to b = f[b].

Now because Map and Apply and many other functions can be operators you can very nicely apply these to variables:

b = Map[f,b]
b = SortBy[b,Last]
b = TakeSmallestBy[b,Norm,3]

would be:

b @= Map[f]
b @= SortBy[Last]
b @= TakeSmallestBy[Norm,3]

Especially if one could use it with Part, that would be very useful:

b[[All,1]] = Map[Norm,b[[All,1]]]

would be:

b[[All,1]] @= Map[Norm]

Regarding the name: not sure yet: FixTo?? (like Prefix and Postfix and Infix)
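A sketch of how such an operator could be defined at user level today (HoldFirst is needed so the variable itself, not its value, is rewritten; the name applyTo is mine):

```wolfram
(* Sketch: applyTo[b, f] rewrites b to f[b], like the proposed b @= f. *)
SetAttributes[applyTo, HoldFirst]
applyTo[var_, f_] := (var = f[var])

b = {3, 1, 2};
applyTo[b, Sort];
b  (* {1, 2, 3} *)
```

Since Set also accepts Part expressions on the left-hand side, applyTo[b[[All, 1]], Map[Norm]] should work the same way.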

POSTED BY: Sander Huisman

Now implemented as //= (ApplyTo).

POSTED BY: Sander Huisman

TilingData, which would give: symmetries, translation vectors (if the tiling is regular), 'unit cell', 'image', a generator filling a circle/box (region) of a certain size, 'dual', et cetera...

This is already available in Wolfram|Alpha, I think...

POSTED BY: Sander Huisman

A nice addition to FilledCurve and JoinedCurve would be to support circular arcs of the Circle command.

POSTED BY: Sander Huisman

I wrote this in 2012. Since then we got all kinds of geometric objects but still no 3D circle.

POSTED BY: Sjoerd de Vries

How about real-time audio processing? There is fairly aged support for capturing video and manipulating it in real time, but not for audio.

Another function that would be very useful, but not so trivial to implement (I've tried) is MinkowskiAddition. See the example:

enter image description here

Basically it is the addition of two curves or two regions to create a third. In the example it is the addition of the two cat-eye-shaped wedges that creates the red shape.

This can be very useful in offsetting a path for example.

POSTED BY: Sander Huisman

Added in 12.3 as RegionDilation

POSTED BY: Sander Huisman

I would like to see LibraryLink provide functions for simple operations that are commonly needed when creating interfaces to other libraries. In particular, I would like to have a function that transposes data while copying it (to an MTensor).

Most libraries store matrices in column-major order (as some distant Fortran-heritage), which differs from Mathematica's row-major preference. This makes frequent transposition necessary when working with these libraries.

A naive double-for-loop transpose does not perform well. Something that is written to make use of SIMD instructions (vectorization) and with cache-effects in mind will perform much better. Rolling your own is not at all trivial. I would like to see LibraryLink provide a function like this out of the box.

Related

POSTED BY: Szabolcs Horvát

I'd like to see something like the Matlab code for convex optimization (CVX).

POSTED BY: Frank Kampas
Posted 10 years ago

It would be a nice if modest improvement if XMLTemplate worked with XMLElement or XMLObject arguments as well as string and File arguments. It's not hugely difficult to stick an ExportString in there, but it seems a little weird that we have to do so.

POSTED BY: Matt Pillsbury

I would like to have a set of basic functions that make it possible to build my own importers for various text based file formats. Mathematica is quite lacking in this regard as it doesn't even have a fast function to convert the usual floating point format 1.234e5.

There's of course the classic Import, which tends to be quite fast on simple formats like CSV, but sometimes it wants to be too smart and ends up breaking things. There's a need to be able to specify more precisely what format I expect and turn off any automatic guessing.

Then there's SemanticImport, which happens to solve the problem I linked to above. We can specify the precise type of each column, which is a step in the right direction, exactly what I want. But it's slow, much slower than Import, and I simply cannot use it for large files, files that are handled fine by Import. This is strange because knowing the expected column formats should in principle make things easier and simpler for the importing function... There's also the fact that while SemanticImport fixed some of the problems of Import, it takes other "problematic" choices even further: it tries to guess even more about what the user wants, it won't let me specify whether I want CSV or TSV, etc. In some cases the guesses are bound to be wrong, which is why I also want something that is dumb but follows instructions precisely. SemanticImport is great and convenient, but it's too smart for me to trust it blindly...

And finally there's Interpreter, which is really great, and looks just like the thing I'm asking for. Except that it's awfully slow and wasn't really designed for this task (ref)

Witness:

str = ExportString[RandomReal[1, {100, 100}], "CSV"];
Map[Interpreter["StructuredNumber"], 
   StringSplit[#, ","] & /@ 
    StringSplit[str, "\n"], {2}]; // AbsoluteTiming

{8.972, Null}

"SemanticNumber" is probably the fastest non-string type that Interpreter can read, yet it's almost unusably slow for reading even small CSVs.

Compare Internal`StringToDouble:

In[10]:= Map[Internal`StringToDouble, 
   StringSplit[#, ","] & /@ 
    StringSplit[str, "\n"], {2}]; // AbsoluteTiming

Out[10]= {0.011367, Null}

It's 1000 times faster. But it's undocumented and it has no error checking to detect when the input is not really a number ... I could of course roll my own with LibraryLink, but this really really shouldn't be necessary ...

I am currently working with relatively small (smaller than 100x100) CSV files that have both row and column labels. Import messes up things, I have no more trust in it. SemanticImport fails in multiple ways, including failing to read reals when it automatically guesses that something should be integer. It also takes more than a second to read a 90x30 table, which makes me wonder if it would be usable at all for a 500x500 one ... I cannot change the file format because I'm sharing the files with others who use other tools (mostly R).

I had to resort to writing my own CSV importer to read the files reliably, but it takes 5 seconds to read just this small file.

There simply isn't any way to read these relatively simple labelled matrices in Mathematica quickly, easily and reliably. R can read them with no problem as a proper dataframe, and resorting to RLink for something this simple seems very silly!

http://mathematica.stackexchange.com/questions/56876/parsing-a-csv-file-using-interpreter

POSTED BY: Szabolcs Horvát

I also find importing generally slow, especially compared to Matlab. I always convert my data to Real32 or Real64 binary data, which can be read in really fast (a direct copy to memory), but of course that does not support structured data...

POSTED BY: Sander Huisman

I'm sure you're aware of this, but just for the sake of completeness: There's also still the good old clunky RegisterImport workflow for cases where you need to implement import from a text-based format from scratch. I have no experience with the speed of this approach though.

POSTED BY: Bianca Eifert

Can we have a "fit to page" option for printing? I always seem to end up with twice the number of pages that would be necessary, half of which are empty or carry a single letter. An option not to cut graphics in two, but to automatically insert a page break where necessary would be nice as well. BTW can we have page setting changes (A4 to A3) that actually stick and work?

POSTED BY: Sjoerd de Vries

I would like better auto-completion for file names:

  • Completion relative to the current kernel directory (Directory[])
  • The ability to add file-name completion for user defined functions.
POSTED BY: Szabolcs Horvát

Yes, please! I could definitely use that feature.

POSTED BY: Bianca Eifert

Recently lots of fairly basic functions have been added that seem to serve only as shorthands, as they can easily be implemented using already existing functionality.

I would like to see functions to zero out the diagonal of a matrix, or to replace it with something else. This sounds like a simple task too, but implementing it in a way that is fast, robust and memory efficient turned out to be not so easy after all. The only good solution that is also simple so far is the one in @ciao's answer on Mathematica.SE. Given that this is not at all obvious, we could really do with a builtin version.

POSTED BY: Szabolcs Horvát
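For what it's worth, a minimal sketch of such functions (the names zeroDiagonal and setDiagonal are made up here; this is the straightforward dense-matrix version, not the memory-optimized approach from the linked answer):

```mathematica
(* Zero out the diagonal of a square matrix *)
zeroDiagonal[m_?SquareMatrixQ] := m - DiagonalMatrix[Diagonal[m]]

(* More generally, replace the diagonal with a constant value v *)
setDiagonal[m_?SquareMatrixQ, v_] :=
  m - DiagonalMatrix[Diagonal[m]] + v IdentityMatrix[Length[m]]

zeroDiagonal[{{1, 2}, {3, 4}}]  (* {{0, 2}, {3, 0}} *)
```

This works for packed numeric arrays, but as the post notes, doing it without extra copies (and for SparseArray input) is where a builtin would really help.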

A way to control the interpolation for functions like ListContourPlot:

[image]

It would be nice if one could give a MaxCellSize -> Somenumber in order to get the following triangulation:

[image]

Normally the graphic will completely fill the convex hull of all the points, but this generally creates very elongated triangles at the edges...

POSTED BY: Sander Huisman

Dear All

I made a small list of things I would like to see:

  • Voronoi in 3D
  • Voronoi not limited to points, also support for lines and regions
  • Map operator with level specification: Map[f,"Level" -> levelspec]
  • GeoDistance to accept a list of points (e.g. a trail) giving all the distances for each pair. i.e. a fast implementation of:

    GeoDistance[locs_List] := BlockMap[GeoDistance @@ # &, locs, 2, 1]

    This is now Implemented as GeoDistanceList since V10.4

  • MinMax to work on Interval objects like Min and Max does. (Implemented in V10.3)

  • Casting of shadows in Graphics3D
  • ClippingMask, where you can use a polygon to clip some other Graphics object...
  • GPSForm / GPSString to format GPS coordinates
  • General Backtracking algorithm
POSTED BY: Sander Huisman

Hello, I want Bayesian networks. Now we have Bayesian statistics and networks.

POSTED BY: André Dauphiné

This is not exactly a new function, but a request for more consistency on everything that is related to layout and printing:

-> when activating page breaks, everything becomes slow

-> there seems to be no way of influencing the page breaks with what's opened, closed or selected

-> all screen updates between setting new page format, page breaks, etc are completely out of phase (I set A3, go back again, and it says A4, although ruler is showing A3, or sometimes gets stuck on A4, etc).

-> print preview doesn't allow the "selection" option (on Windows)

-> the screen zoom actually influences the outcome of the printing, even when everything is set to a printout environment and there are no magnification-related settings (Dingbats get misaligned and cut).

-> on printout environment, ctrl+mouse wheel reduces magnification, in whichever direction we turn

-> Grid forces width but doesn't force height, which feels odd

-> fewer bugs in Grid, Backgrounds, etc.

-> well, summing up the last two: some clean-up of the Grid, Pane, Item, etc. functions. A lot of options are available, but somehow I get the feeling (which is supported by the number of posts in both communities) that we always end up a small step from being able to do what we really wanted. For instance, although Spacing, Margins, etc., all make sense, in real cases I always get bitten by an extra pixel here and there that doesn't go away, or by a magnification dependency, etc. And if there's a Scaled function, couldn't there be Imaged and Itemed equivalents, or an ItemSizeOverride -> {w,h} or ImageSizeOverride -> {w,h}, for functions that take the opposite type as a parameter rather than an option?

-> Something as simple as Grid[{{Item[Rotate["this is a long test example",Pi/2],ItemSize->{3,50}]}},Frame->True] has what I would call an odd behaviour

-> and on this one, the rendering is not the same: Grid[{{Style["ABC abc",15,Bold],Rotate[Style["ABC abc",15,Bold],Pi/2]}}]

I imagine WR is completely aware of these kinds of inconsistencies (at least, I see WR employees suffering from them when presenting live), so there's no point in going on (I do send bug reports whenever I find new stuff). But I wanted to voice my frustration. Doing most types of analysis in Mathematica is extremely fast. But every single time I need to turn results into some kind of printable report, I end up hitting not one but several bugs (I really mean bugs, not mere differences of opinion). Also, thinking that I should just deliver a CDF is unrealistic; to start with, most companies block computer admin rights, and installing new software goes through an information risk assessment... and making it accessible through an online service is a no-go in a commercial environment.

POSTED BY: Pedro Fonseca

Adding a MaxStepSize option to Plot, so that it would calculate the initial setting for PlotPoints automatically would be convenient.

It would effectively set PlotPoints -> 1 + Ceiling[(x2 - x1)/maxstep] or to the default PlotPoints, whichever is greater.

POSTED BY: Michael Rogers
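The proposal could be emulated today with a small wrapper; a sketch (plotMaxStep is a made-up name, and 50 stands in for the default PlotPoints):

```mathematica
(* Emulate a MaxStepSize option by translating it into PlotPoints *)
SetAttributes[plotMaxStep, HoldAll];
plotMaxStep[f_, {x_, x1_, x2_}, maxstep_, opts___] :=
  Plot[f, {x, x1, x2},
    PlotPoints -> Max[1 + Ceiling[(x2 - x1)/maxstep], 50], opts]

plotMaxStep[Sin[1/x], {x, 0.01, 1}, 0.005]
```

Having it built in would of course let the adaptive refinement interact with the step bound properly, rather than just raising the initial sampling density.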
Circular Hough transform, and possibly the extension to ellipses or even arbitrary shapes.
Same implementation as Radon
POSTED BY: Sander Huisman
Posted 11 years ago
Functionality like C++ ostringstreams. Currently it's possible to treat a string as an input stream, but there's no straightforward way to send output to a stream and turn it into a string (you can always write to a temp file and read the file back, but that's pretty yucky).
POSTED BY: Matt Pillsbury
In Mathematica 9 there is a way to define custom streams, though I haven't used it so I'm not deeply familiar with the API. 

If you need to write to a string, ExportString works.

The problem with repeatedly writing to a string is that strings are immutable in Mathematica, so this paradigm doesn't fit it very well.  Of course you can always create a new string by appending to an existing one.
POSTED BY: Szabolcs Horvát
Posted 11 years ago
Better support for functional programming. *Mathematica* is pretty good, but there's definitely room for improvement, especially if you need to, say, build a list in a recursive function. Things that would help include:

* Lists built out of cons cells, perhaps constructed using a function named LinkedList or ForwardList, which could be converted to a normal list using Normal. You can do this now, of course, but you either have to use some made-up head for your conses, or be very careful if you're using two-element Lists, and in either case there isn't direct support from functions like `Map`, `Fold`, `Select`, etc.
* Functional data structures for sets (and maybe maps). The obvious example would be a red-black tree, but there's a universe of options out there. Again, a key advantage would be integration with existing set-oriented functions like `Union` and `Complement`, and again, Normal could be used to turn them back into a list.

The key advantage I'm looking for is the ability to efficiently add or delete elements one at a time, which is really convenient in a lot of circumstances, but is really kind of clunky to do efficiently in *Mathematica* as it exists now.

Also, `$IterationLimit` should default to `Infinity`; currently if you want to use (tail) recursive approaches, you have to wrap everything in a `Block` to keep from stopping prematurely on all but tiny problems.
POSTED BY: Matt Pillsbury
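As the post says, this can be emulated today with a made-up head; a minimal sketch (the names cons, nil, and toList are my own, and toList plays the role the proposed Normal conversion would):

```mathematica
(* A linked list from cons cells with a made-up head *)
toList[nil] = {};
toList[cons[x_, rest_]] := Prepend[toList[rest], x]

(* Prepending an element is O(1), unlike AppendTo on ordinary lists *)
lst = Fold[cons[#2, #1] &, nil, {3, 2, 1}];
toList[lst]  (* {1, 2, 3} *)
```

Note the recursive toList also runs into $RecursionLimit for long lists, which illustrates the poster's point about $IterationLimit/$RecursionLimit defaults getting in the way of this style.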
It would be great to have an option to create formatted Excel files, with control over:
    1- Cell color
    2- Font Color
    3- Column size
    4- Border color
    5- Export as Excel Table objects.
    6- Cell comments
    and so on...

It would be nice to just create a formatted Grid, and when exporting it using Export["myexcel.xlsx", Grid[myData, (* lots of formatting options *)]], have the data arrive in Excel already formatted. I know I can use VBScript to do that, but I would like to do it all from inside Mathematica, in a cleaner, simpler way.
I automate a lot of reports today using Mathematica, but they are very ugly due to the lack of formatting capabilities.
I know about ExcelLink, but it's not a direct option for me because it needs Excel installed and works only on Windows. I would love to have this improved Export version.
If there are another options, please, let me know.
 
POSTED BY: Rodrigo Murta
Posted 11 years ago
Along the same lines as better Excel exports, being able to do any exports to PowerPoint would be really helpful. PowerPoint is somewhat terrible, but it's also the lingua franca of many corporate environments. It would be nice if these improvements extended to better export to WMF/EMF formats, which tend to be pretty shaky.

Also, while we're improving Grid exports, better support for exporting Grid forms to HTML would be fantastic. 
POSTED BY: Matt Pillsbury
Deselect

We have Cases[expr,pattern] and DeleteCases[expr,pattern]

Why not Deselect[expr,criterion]?

(* standard caveat, yes, easy to write your own... *)
POSTED BY: W. Craig Carter
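Per the standard caveat, one possible one-liner (deselect is a hypothetical name, mirroring the Select signature):

```mathematica
(* Deselect: keep the elements that do NOT satisfy the criterion *)
deselect[expr_, crit_] := Select[expr, ! crit[#] &]

deselect[Range[10], EvenQ]  (* {1, 3, 5, 7, 9} *)
```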
The built-in debugger has a very bad reputation, but in my experience it is quite usable except for one key area: breakpoints.  If I don't try to use breakpoints, then it's a very useful tool.  Instead of breakpoints it's possible to simply trigger a message or use Assert or similar.  (I don't mean that there isn't room for improvement, just pointing out the currently usable solutions.)
POSTED BY: Szabolcs Horvát
I've been using Mathematica for 12 years or so, and I haven't figured the debugger out yet (for real!). It is just too damn confusing! I would love to see a good video about it, and try to use it ;)
POSTED BY: Sander Huisman
Posted 10 years ago

Because of the scarce documentation, I sent an inquiry about the built in debugger to tech support about a year ago. The response I got was that they recommended I use Workbench, implying the built in debugger was more or less abandoned.

POSTED BY: David Keith
Maybe it's worth starting a separate thread about that.  It would be interesting and useful to figure out a good workflow.  I use it occasionally, but not often, and almost never with my own code.  But I don't try to use breakpoints any more---bad experiences with those.
POSTED BY: Szabolcs Horvát
http://www.dbaileyconsultancy.co.uk/debugtrace/debugtrace.html
POSTED BY: Frank Kampas

About debugger. I have made this here: community.wolfram.com/groups/-/m/t/250326

POSTED BY: Rodrigo Murta
I would like to see the Profiler and the MUnit unit test framework being moved from Workbench into the core Mathematica.  Neither of these are inherently tied to the Workbench.  In fact I already use MUnit separately from the Workbench sometimes because I prefer to do package development in other plain text editors (such as Vim), and MUnit can be used from within a notebook too.
POSTED BY: Szabolcs Horvát
@Szabolcs. This would be cool!
POSTED BY: Rodrigo Murta
I think we all want break-points, and some simple command to go step-by-step, and hover over any variable and check their current value...
POSTED BY: Sander Huisman
Posted 11 years ago
I would love to see a GUI change. I would most appreciate a sidebar where all functions can be found, and by clicking on one being automatically transferred to that part of my code.
It would save me time scrolling up and down.
POSTED BY: Tom Zinger
That would indeed be awesome! Furthermore if you right-click on any function it could say 'show definition' either going to the built-in help or your own definition.
POSTED BY: Sander Huisman
Relational algebra operations would be a good start.  Also, the query optimization done by SQL, such as Microsoft's, is very crude.  I'm sure the Wolfram Language could do a better job.  Given the goal of extending the usage of the Wolfram Language, relational database querying would be a big step in that direction.
POSTED BY: Frank Kampas
What kind of databases? Relational databases are kind of inherently stuck with SQL-like queries or things you'd find in relational algebra.

Are you looking for operations which are composition of relational algebra operations, like some kind of higher order SQL?

Or are you looking for functionality for non relational databases like graph databases
POSTED BY: Sean Clarke
I'd like to see functionality for querying databases, above and beyond what is provided by SQL.
POSTED BY: Frank Kampas
Also it would be great if
BinLists and BinCounts
were extended with:
BinIndices
(returning indices in such a form that you can use e.g. Extract on them)

A lot of times you want to bin your data X, and get the 'associated' data Y. i.e.:
{{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-},{-,-}} 
Say you have these data pairs. Then you would like to bin on the first value and get the second values for each bin. What I do now is bin in 2D and make the bin size in the second dimension very, very large. But this only works with numbers; if it is anything else it does not work.
POSTED BY: Sander Huisman
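A rough sketch of the proposed BinIndices for the one-dimensional, uniform-bin case, built on PositionIndex (v10+); the name and interface are made up:

```mathematica
(* For each bin, return the positions of the values that fall in it,
   so associated data can be retrieved with Part or Extract *)
binIndices[vals_, {min_, max_, dx_}] :=
  Lookup[PositionIndex[Floor[(vals - min)/dx] + 1],
    Range[Ceiling[(max - min)/dx]], {}]

(* e.g. bin {x, y} pairs on x and collect the y values per bin *)
pairs = {{0.1, a}, {1.2, b}, {0.7, c}};
Map[pairs[[#, 2]] &, binIndices[pairs[[All, 1]], {0, 2, 1}]]
(* {{a, c}, {b}} *)
```

A builtin version could share the (presumably optimized) binning machinery of BinLists/BinCounts instead of recomputing it like this.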
I needed this functionality last week. Not so easy to do from scratch.
Related SE post: http://mathematica.stackexchange.com/questions/17734
POSTED BY: Rodrigo Murta
Or an option for BinLists where you can specify a pure function as an argument (or several), so that it bins over each of these functions' results.

So something like:

BinLists[{{x1,y1,z1},{x2,y2,z2},{x3,y3,z3},{x4,y4,z4},{x5,y5,z5},......},{0,100,10},{0,200,10},BinFunctions ->{First, Part[#,2]&}]

I.e. this would bin the {x,y,z,...} data first over the x values in bins from 0 to 100 in steps of 10, and then over the y values in bins from 0 to 200 in steps of 10. Of course one could specify more complicated functions, say Total or Mean. With that you could do very elaborate binning, and it would remove the need for BinIndices.
POSTED BY: Sander Huisman
Access to the player’s kernel, from other interfaces (custom, Excel, etc), to call for functions programmed/prepared with the full Mathematica version.

No need to point out the “security risks” involved... But the need is huge for everyone who is forced to live in the Excel world. It is also applicable to any kind of interaction with other local applications (CAD, etc.), where server-side or cloud computation is a no-go: too “heavy” for common simple projects (small teams, etc.), too open for confidential cases, too slow for thousands of individual requests (for instance, from an Excel calculation update), local work where internet is still rare, etc. Or where the project budget doesn't “cover” multiple Mathematica licenses.

I go even further and specify the difference between the free Player and Player Pro (to protect myself against the "only Pro" solution). The free version pops up an ad message in the corner of the screen every 30 minutes while in use, saying “Wolfram Research, supporting your computations”; the Pro version, or the enterprise "produced" version, only does it once at the beginning of the session.

;-)

PS – the community interface still has some interesting bugs. This same post that you are reading was previously substituted by the first entry from Rodrigo
POSTED BY: Pedro Fonseca
An option for Interpolation not to extrapolate

Interpolation functions return extrapolated values when their arguments are out of range. However, there are cases where I would prefer that an InterpolatingFunction automatically clip.

Consider this "mistake":
data = Table[{x, Sin[x + RandomReal[{-0.1, 0.1}] + x]}, {x, -Pi, Pi, Pi/64}];

intF = Interpolation[data]

Plot[intF[x], {x, -1.5 Pi, 1.5 Pi}]

It would be nice if there were an option to have an InterpolatingFunction either clip its arguments or return Null.

It is fairly simple to find a work-around:
clipMyInterpolation[f_, x_] := Module[
  {bounds = Flatten@First[f]},
  f[Clip[x, bounds]]
  ]

clipMyInterpolation[intF, 24]
But, it would be nice to have an option such as:
intF = Interpolation[data,Extrapolation->ClipBounds]
or
intF = Interpolation[data,Extrapolation->Null]
POSTED BY: W. Craig Carter

This has been around, but undocumented:

intF = Interpolation[data, "ExtrapolationHandler" -> {Indeterminate &, "WarningMessage" -> False}]

"ExtrapolationHandler" -> function calls function on any input outside the domain. Whether or not to turn warning messages off (as above) can be decided case by case.

This option can be passed to NDSolve as well.

POSTED BY: Michael Rogers
@Sander I know that it's possible to do it, but it's awfully inconvenient and it takes a lot of manual work.  It's much easier to do it using most other plotting packages.  There are packages that automate much of this manual work, such as SciDraw, but they have to reimplement a lot of stuff to be able to do this in a general and reliable way (e.g. all of frame and tick drawing), and they still force me to do more manual work, e.g. specifying plot ranges manually for every subfigure.

(I can't see your figure because Community's image hosting is broken at this moment.)
POSTED BY: Szabolcs Horvát
There is an add-on program to Matlab called CVX which optimizes convex problems very efficiently.  Python has something similar called CVXOPT.  It would be good to have an equivalent in Mathematica.
POSTED BY: Frank Kampas
Generating plots in grids is indeed a hassle, though it can be achieved by using Inset with the right arguments. Making a figure e.g. 10 cm wide is a matter of converting it to printer points and setting that as the width (for EPS and PDF output). It might appear small in Mathematica, but the output will be correct. Stay away from Grid and GraphicsGrid would be my advice once you want to make complicated arrays of figures with specified dimensions/paddings.
By using inset you make the figures precise, e.g.:


This figure was used in a journal where the overall size was important but also the font-sizes were important.
POSTED BY: Sander Huisman
@Sander nice plot. Can you share your notebook with these alignments using Inset? I normally use Panel inside Grid.
Is there some way to share notebooks in the forum? Let's test it...
Update:
It does not work using the "Add file to this post" button.
POSTED BY: Rodrigo Murta
Rodrigo, here is the 'template'; I minimized the code, hope you can decipher it:
 z=490;
 h=z;
 z1=z/3;
 z2=2z/3;
 h1=h/3;
 h2=2h/3;
 mleft=40;
 mright=6;
 mtop=33;
mbottom=35;
mtoptop=19;
multimargins={{mleft,mright},{mbottom,mtop}};
topfigmargins={{mleft,mright},{mbottom,mtoptop}};


size12={z2/2,h1-(mtop-mtoptop)};
size45={z1,h2/2};
sizeb={z2,h2};
size3={z1,h1-(mtop-mtoptop)};
pb=Plot[Sin[x],{x,0,2\[Pi]},Frame->True,AspectRatio->Full,ImagePadding->multimargins,PlotRange->{{0,2\[Pi]},{-1,1}}];
{p1,p2,p3,p4,p5}=Plot[Cos[# x],{x,0,2\[Pi]},Frame->True,AspectRatio->Full,ImagePadding->multimargins,PlotRange->{{0,2\[Pi]},{-1,1}},Epilog->Text[Style[#,16,Red],{\[Pi],0.5}]]&/@Range[5];


multistrucplot=Graphics[{
(*Red,Line[{{z1,0},{z1,h1+h2}}],
Line[{{0,h2},{z1+z2,h2}}],
Line[{{0,h2/2},{z1,h2/2}}],
Line[{{z1+z2/2,h2},{z1+z2/2,h2+h1}}],*)
Inset[pb,{z1,0},ImageScaled[{0,0}],sizeb],
Inset[p5,{0,0},ImageScaled[{0,0}],size45],
Inset[p4,{0,h2/2},ImageScaled[{0,0}],size45],
Inset[p3,{0,h2},ImageScaled[{0,0}],size3],
Inset[p2,{z1,h2},ImageScaled[{0,0}],size12],
Inset[p1,{z1+z2/2,h2},ImageScaled[{0,0}],size12]
},
ImageSize->(size3+sizeb),
PlotRange->{{0,z1+z2},{0,h1+h2-(mtop-mtoptop)}}
];
Show[multistrucplot,PlotRangePadding->Scaled[0.01]]
Should give something like:
POSTED BY: Sander Huisman
Interesting use of Inset. Tks for share it!
POSTED BY: Rodrigo Murta
The key thing is to use:
- ImageScaled[{0,0}] rather than {0,0}. 
- Use margins on the 'original' figures, and no sizes
- Specify the sizes in the inset command.
- AspectRatio -> Full in the original figures.

But I think we all agree that there should be something 'builtin' like image-assemble: graph-assemble. Or that graphics-grid is fixed ;)
POSTED BY: Sander Huisman
I would like to see graphics handling improved to make it easier to prepare high quality figures.  In particular, I would like to have easy solutions to the following problems:
  • Make it easy to generate a grid of figures which are properly aligned with each other, e.g. like this. Currently this is a huge pain in Mathematica and it's extremely difficult to make the frames line up without a lot of manual work.
  • It should be possible, and relatively easy, to prepare figures to size.  For example, if a column in the journal is 10 cm wide, I want to prepare the figures precisely to 10 cm, while keeping the font sizes consistent (e.g. 10 pt precisely).  This means I can't first make the figure to arbitrary size then rescale it outside of Mathematica because then the fonts will be rescaled too.  This is possible currently if I only use Graphics[] objects.  As soon as I try to make a Grid[] or GraphicsGrid[], or as soon as I use legends (which do not produce Graphics[], again showing the existing limitations!), it becomes very difficult to do it.
I think all these related problems stem from one thing: ImagePadding (and related measures) is unpredictable and not even knowable by the kernel.  It is only known by the front end.  For example, if I want to line up frames in a grid, I need to make sure that all frames have the same ImagePadding.  However, if I set the ImagePadding manually, I need to make sure that it's not so small that it cuts off labels and not unnecessarily large.  This involves a lot of manual work and is difficult to automate, because it's not possible to retrieve the automatic ImagePadding without resorting to ugly hacks such as rasterizing the graphic first.
POSTED BY: Szabolcs Horvát
Access to the guts of ListContourPlot and ListContourPlot3D

It would be very nice to have direct access to the meshes created from contour plots of data as in ListContourPlot.  I've extracted the data from the GraphicsComplex that is created by the graphics function, but this seems like a duplication of effort.

Perhaps, such a function could return sets of closed and non-closed surfaces represented by vertex-edge-face data structures.

POSTED BY: W. Craig Carter
@Szabolcs

(great... this is now what I want to use... thanks a lot for spoiling...)

Very interesting package (that I was not aware of...)!

Unfortunately, when put in perspective with my needs, the price is a "little" out of my league :-(
POSTED BY: Pedro Fonseca
@PedroFonseca

What you describe is similar to symbolic regression.  Symbolic regression means automatically finding a formula that describes the data (by various methods such as genetic programming, hopefully in a smarter way than brute-force trying a large number of formulae).  I'm aware of one commercial Mathematica package for it (see the link).

Here are two real world applications of the technique, in case anyone is interested (and to show that it's not as crazy as it sounds): (1)(2)
POSTED BY: Szabolcs Horvát
I would like to have a (huge) stack of functions available to the model-fitting functions, so that with a simple command one could search the whole stack for the one that best fits given data, and then be able to sort and filter them according to a fitting criterion, a function-family classification, etc.

Functions would be available for at least 2D and 3D data.

I know many critics would say that if one is fitting data to a function, it is certainly because the function has a known theoretical relation to the phenomenon, which means that there should be no reason for a blind search (I've heard this one too many times...). In my business (and a lot of others...), this is not necessarily the case, and we often just need a function that describes a given variation so as to integrate it into a more complex model. This can be done with the Interpolation function, but as soon as things become a little more complex, there's nothing like a mathematical object that we can see, analyze, etc.

This should be relatively easy to start by WR since there's already the Wolfram Functions Site. Also, the functions don't necessarily need to be available on the install, and can be loaded when requested, which means that the framework can start with very little effort, since it can grow along time.

I think that there's no need to say that it could also be a very interesting functionality to have available on W|A (just trying to find a budget ;-) )
POSTED BY: Pedro Fonseca
NearestGradients as a first cousin to Nearest

I wonder if it is possible to include a "Nearest-like" function that returns points that have the largest magnitudes of gradients with respect to a set of fixed data points.

For example:
distanceFunction[v1:{x_, y_, z_}, v2:{x2_, y2_, z2_}] := EuclideanDistance[v1, v2]
scalarPotential[v1:{x_, y_, z_}, v2:{x2_, y2_, z2_}] := Exp[-distanceFunction[v1, v2]^2]

data = RandomReal[{0, 1}, {10000, 3}];

nearestGrad = NearestDistanceAndGradient[data, ScalarPotential->scalarPotential[#1,#2]&, InterpolationOrder->2]
and nearestGrad[pt,2]  would return a list:
{
{pt1, distanceFunction[pt, pt1], Grad[scalarPotential[pt, pt1]]},
{pt2, distanceFunction[pt, pt2], Grad[scalarPotential[pt, pt2]]}
}

where pti = {xi, yi, zi} are the points for which Abs[Grad[scalarPotential[pt, pti]]] is ordered.

I suppose that this would only work for differentiable DistanceFunctions.
POSTED BY: W. Craig Carter
Fast Modifications or Updates to Nearest 

I wonder if it is possible to modify nearest with minimal computation if only one or two points are changed.

For example, this might be a typical use of Nearest:
data = RandomReal[{0, 1}, {10000, 2}];
nearestFunc = Nearest[data -> Automatic]
location = nearestFunc[{0.5, 0.5}]
EuclideanDistance[{0.5, 0.5}, First[data[[location]]]]

newdata = ReplacePart[data, location -> {0.5, 0.5}];
Now suppose that I only want to change one element of data, as above. I would then need to recompute the NearestFunction using all the data.
I would guess that it would be faster to update the NearestFunction with information stored in the previous NearestFunction.
newNearestFunc = Nearest[newdata]
For example, usage of the function might be:
newNearestFunction = ReplaceElementNearestFunction[nearestFunc, {i->{x,y,z},j->{x,y,z}}]
POSTED BY: W. Craig Carter
ISO WEEK YEAR NUMBER & WEEK DAY NUMBER
In DateString and DateList, an old request that I still miss in version 10, very common in financial markets and other businesses, is ISO-week-related date functions (standard in Oracle and SQL Server):
- Given a date, what are its ISO week number and ISO week year?
Functions to do the inverse operation would be nice too. For example, week 04/2014 -> {{2014, 1, 20}, {2014, 1, 26}}.
A weekday number option ((1, 2, 3, 4, 5, 6, 7) for (Sun, Mon, Tue, Wed, Thu, Fri, Sat)) would also be welcome. Today we have "DayName", "DayNameShort", "DayNameInitial" but no "DayNumber", which is very useful for calculations.
POSTED BY: Rodrigo Murta

This has now been implemented:

DateString["ISOWeekDate"]
DateString["ISOWeekDay"]
DateString["Week"]
DateString["WeekShort"]
POSTED BY: Sander Huisman
Again, this is not about adding new functions but improving existing functionality.

I would like to see RLink improved!  In particular, I would like to see it (better) support external R installations and support this more seamlessly.  It should be possible to set it up to always use a particular external R installation by default (currently this is rather difficult to achieve), external R should be officially supported on all three platforms and it should support the latest R version (R 3.0).

Why Would a Mathematica User Care about R?

Quoting from this blog entry,
... with RLink I now have immediate access to the work of the R community through the add-on libraries that they have created to extend R into their field. A great zoo of these free libraries fill out thousands of niches–sometimes popular, sometimes obscure–but lots of them. There are over 4,000 right here and more elsewhere.

The only problem is that this was not really so at the release of v9: on OS X and Linux, neither external R installations nor package installation (!!) were officially supported.  And there's not that much in the base R distribution that I'm interested in as a Mathematica user; all the advantages would come from installing third-party packages.

The result is that not many people seem to be using RLink despite its huge potential.  Later it turned out that there are ways to get it working on OS X and Linux but it's not in the documentation how to do it and it doesn't always work.

RLink is an excellent system, it has great potential and clearly a huge amount of work went into it.  It is a great pity that it misses that tiny little last step that could make it usable and useful for everyone.  I'm really hoping that WRI is not going to abandon it and the necessary improvements (which are really just the few little things I mentioned above!) will be made for the v10 release.

Every time I recommend IGraphR to someone, their biggest barrier to using it is getting RLink working on their system with an external package first, then the extra setup needed each time before loading IGraphR.
POSTED BY: Szabolcs Horvát

And get RLink to work out of the box under OSX 10.11 (El Capitan) without Szabolcian heroics. http://szhorvat.net/pelican/setting-up-rlink-for-mathematica.html

POSTED BY: Seth Chandler
Matt, I agree that we need a reliable means of cleanup.  I've been wanting this for a while and here's a related thread.  There's a helpful undocumented function called Internal`WithLocalSettings, but it's said to be not 100% reliable.  I use it sometimes.  This undocumented function is an essential element for implementing something like JavaBlock[].

This is also related to my earlier suggestion (above) for a hook into object destruction. There I wrote that the best solution that's currently possible is an analogue of JavaBlock.  But JavaBlock also relies on the undocumented WithLocalSettings.  So I'd say that this functionality is very important for package developers (even if end users would rarely use it).
POSTED BY: Szabolcs Horvát
Matt, there's the new StringTemplate in v10, which, unlike StringForm, actually returns a string, and accepts named placeholders.
POSTED BY: Szabolcs Horvát
@Szabolcs
True, for such things it behaves very differently. But I think for most cases people use With, Block and Module almost interchangeably...
POSTED BY: Sander Huisman
@Matt:

Why not use ToString[expr, form] in combination with ScientificForm, NumberForm, EngineeringForm, PaddedForm, AccountingForm, BaseForm? That should solve most cases, right?
POSTED BY: Sander Huisman
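For example, combining ToString with a *Form wrapper already gives printf-style fixed formatting, and (as mentioned elsewhere in this thread) the v10 StringTemplate handles named placeholders:

```mathematica
(* printf-like number formatting via ToString with a *Form wrapper;
   {6, 2} means at most 6 significant digits, 2 after the decimal point *)
ToString @ NumberForm[3.14159, {6, 2}]  (* "3.14" *)

(* named placeholders with StringTemplate (v10) *)
StringTemplate["pi is about `p`"][
  <|"p" -> ToString @ NumberForm[3.14159, {6, 2}]|>]
(* "pi is about 3.14" *)
```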
Posted 11 years ago
@Sander:

Yes, you can obviously make it work, but the results tend to be verbose and clunky. One of the things I like about Mathematica's regex functionality is that it supports both traditional regex syntax (which for simple cases is compact and clear) and also a more verbose but abstract Mathematica DSL that expresses the same thing using familiar Mathematica syntax. I'd love to see the same sort of flexibility in string formatting.

@Szabolcs:

That looks like a step in the right direction, but misses the easy number formatting of printf. 
POSTED BY: Matt Pillsbury
Posted 11 years ago
I also would like a Printf which uses C-style printf strings for formatting and returns actual strings. StringForm returns wrapped objects, which isn't a big deal, but is much more limited than Printf in key respects (there's no real way to control how numbers are displayed). Right now, you have to do something like use J/Link, which is slow and not terribly pleasant.
POSTED BY: Matt Pillsbury
Posted 11 years ago
I'd like a function that provides reliable clean-up functionality in the face of non-local transfers of control. Something like finally blocks in Java, or UNWIND-PROTECT in Common Lisp, or like destructors implicitly do in C++. An example would be:
With[{stream = OpenWrite[]},
    Cleanup[
        (* some stuff *),

        Close[stream]]];
Whatever (* some stuff *) might be, including if it has Return[], or Throw[], or if someone uses an Abort[] in the middle of it, I want to be sure that Close is run, to the extent possible within Mathematica's semantics. 
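In the meantime, a rough sketch of such a function (my own name for it) can be built on CheckAbort. This handles Abort[] but not Throw[] or Return[], which is exactly why the undocumented Internal`WithLocalSettings keeps coming up in these discussions:

SetAttributes[withCleanup, HoldAll];
withCleanup[body_, cleanup_] :=
    Module[{result},
        (* if body is aborted, run cleanup and re-raise the abort *)
        result = CheckAbort[body, cleanup; Abort[]];
        cleanup;  (* normal path: run cleanup exactly once *)
        result]

withCleanup[
    stream = OpenWrite[];
    Write[stream, 42],
    Close[stream]]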
POSTED BY: Matt Pillsbury
What I have wanted for a long time is new functionality for the With command. Using With one can't define a constant in terms of another constant in the same With.  I end up with many levels of embedded With commands.  I would like to see a generalized form of With like this:

With[ {blah1,..}, {blah2,...}, {blah3,...}, expr ] = With[ {blah1,..}, WIth[ {blah2,...}, With[ {blah3,...}, expr ]]]

It should be defined for any number of levels.  A recursive definition: With[ {blah1__} , {blah2__}, expr__ ] := With[ {blah1}, With[ {blah2}, expr]]

Clearly, one can avoid such embedding with Block and Module, but it is wasteful to define a variable when a constant will suffice. 
POSTED BY: Terrence Honan
Hi Terrence Honan, meanwhile you can use this implementation by Leonid, which does exactly what you want.
POSTED BY: Rodrigo Murta
Is there a big performance gap between Block/Module and With? I never noticed it being major. Maybe if you have many, many small tasks it will make a difference?!
POSTED BY: Sander Huisman
Sander and Terrence, it's not about performance.  With[] and Module[] serve different purposes.  Consider With[{a=1}, Hold] and Module[{a=1}, Hold].  With[] is a good way to inject something into a held expression.  Suppose you're building a Function object dynamically.
POSTED BY: Szabolcs Horvát

The Inactivate doc page has a neat implementation of IterateWith that has this functionality.

POSTED BY: Sjoerd de Vries

Namely, "Replace With by an iterated version, so that later variables can refer to earlier ones:"

SetAttributes[IterateWith, HoldAll]
IterateWith[expr_] := Activate[Inactivate[expr] //.Inactive[With][{first_, rest__}, body_] :> 
Inactive[With][{first}, Inactive[With][{rest}, body]]]

With[{a = 3, b = a + 2, c = a + b}, {a, b, c}]

{3, 2 + a, a + b}

IterateWith[With[{a = 3, b = a + 2, c = a + b}, {a, b, c}]]

{3, 5, 8}

POSTED BY: Vitaliy Kaurov
I'd like to see something like UnitStep that returns 0 for 0, while being just as fast as UnitStep and not unpacking arrays. (Right now UnitStep[0] == 1.)  Maybe this could be an option to UnitStep.

Motivation:

Vectorization and always using packed arrays is a  good way to speed up computations.  I'm talking about this style of programming: (1)(2).

When programming like this, it's often necessary to transcribe inequalities in terms of UnitStep.  For example, the sum of all numbers greater than or equal to 5 in 'list' can be computed as 
Total[list UnitStep[list - 5]]

This is typically much faster than Total@Select[list, # >= 5 &], depending on the distribution of the numbers in list.  While this is a very simple example, the technique is generally useful.  Unfortunately it is also very cumbersome.  Suppose we're looking for the numbers strictly greater than 5.  What then?  We will need to use 
(1 - UnitStep[5-list])
instead, which is still simple, but just too much of a mental effort to think up without making a mistake.  This is the problem I'd like to see a solution to, or in general, make it more convenient to vectorize calculations.

Of course I do realize that it's trivial to define one's own function for this, or create a package.  It would be nice to have it built-in nevertheless.
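For example, a minimal sketch of such a helper (my own name for it) that stays vectorized on packed arrays:

strictUnitStep[x_] := UnitStep[x] Unitize[x]  (* 1 for x > 0, 0 for x <= 0 *)

Total[list strictUnitStep[list - 5]]  (* sum of elements strictly greater than 5 *)

Both UnitStep and Unitize are listable and keep arrays packed, so this avoids the error-prone 1 - UnitStep[...] rewriting.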
POSTED BY: Szabolcs Horvát
Another suggestion is a function to get function information. I have written about this somewhere in the community, but I can't find it.
Something like FunctionData[functionName_], and it could retrieve: the Mathematica version the function was implemented in, and its Attributes (which are sometimes protected).
This should be a simple way to work with version compatibility and select all new implemented functions of some version.
POSTED BY: Rodrigo Murta

This now exists:

WolframLanguageData
POSTED BY: Sander Huisman
- official function to pick up, from an existing/generated Graphic, its different internal components, avoiding things like: Plot[...][[1]]
(including a function to illustrate its structure in a kind of graph view of its contents/definition)
POSTED BY: Pedro Fonseca
Pedro Fonseca, very nice requests.
POSTED BY: Rodrigo Murta
This is of minor importance but being able to plot from  0  to  Infinity  (via a  Tan  transformation or something similar) would be occasionally convenient.  For instance, given
Integrate[f[x], {x, a, Infinity}]
copy it and change Integrate to  Plot  just to get a quick visualization:
Plot[f[x], {x, a, Infinity}]

It would be more or less equivalent to
Plot[f[a + Tan[Pi t/2]], {t, 0, 1}]
with  Ticks  automatically set.

Similarly, it could be extended to plotting from  -Infinity  to  Infinity.  There could be issues with determining how fast to approach  Infinity  (e.g. what value of  x  corresponds to  t = 0.5).
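A rough sketch of such a helper (my own names; it assumes x is a plain symbol and that the iteration variable t does not appear in the plotted expression):

plotToInfinity[f_, {x_, a_?NumericQ}, opts___] :=
    Plot[Evaluate[f /. x -> a + Tan[Pi t/2]], {t, 0, 1},
        Ticks -> {Table[{2 ArcTan[d]/Pi, a + d}, {d, {0, 1, 5, 20}}], Automatic},
        opts]

plotToInfinity[Exp[-x], {x, 0}]

The tick placement hard-codes the offsets {0, 1, 5, 20} from a; a built-in version would choose these adaptively, which is exactly the "how fast to approach Infinity" issue mentioned above.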
POSTED BY: Michael Rogers
- be able to name and give order to input fields and other interface objects, so to jump between them in the desired order, or to go directly to a given object
- an easy way to keep Shift+Return or Enter from messing up interfaces like input fields (so that I don't need to say "you're pressing it wrong")
POSTED BY: Pedro Fonseca
- raytracing (or better/more automatic integration with pov ray or other)
- bump mapping
- natural management of subscripted variables (already mentioned, but I insist)
- drag and drop dynamic objects (to build interfaces like system modeler)
- persistent data and "interface" tabular object (meaning excel like object)
POSTED BY: Pedro Fonseca
Yet another suggestion, which is not strictly about adding new functions, but about improving existing functionality:

Mathematica's philosophy is that the user should be able to input a simple request and be able to get a result automatically, without needing to either care about how that result is obtained or needing to fuss with details.  This can indeed be quite useful when getting familiar with a functionality area or doing quick calculations.  Many functions, such as NIntegrate, will automatically choose a good method to use internally, and will just work and give a result.

However, sometimes the heuristics used for automatic method selection will inevitably fail, and it will be necessary to better understand the available methods and fine-tune them.  Some functions make this possible, and NIntegrate is a very good example: it even provides a framework for extending it with new methods.

What I would like to suggest is three things:
  1. Make it more transparent what happens when methods are chosen automatically.  Make it possible to find out which method was chosen by the system.  Make all the values of the relevant suboptions accessible.
  2. In general, document Method options better!  Some functions/functionality, such as numerical integration or unconstrained optimization, are quite well documented in "advanced tutorials" (though I wish that subsections of these tutorials were more thoroughly referenced from the doc pages of the actual functions).  But some other areas of complex functionality, such as graph drawing, are not nearly as well documented.  It is typically difficult to find out all the available Method options and suboptions for many different functions.
  3. Provide more fine control!  I would like to see more optional fine control, e.g. for graph drawing, accompanied by more advanced documentation.  Keep the default on Automatic, but let us fine-tune the layout methods better.
POSTED BY: Szabolcs Horvát
I agree with Szabolcs. Graphs are so important in a lot of areas that we should have more stability there. I don't want to start a "me too" thread, but just want to give a little support.
POSTED BY: l van Veen
Posted 11 years ago
Two improvements in help messages.

In DSolve and NDSolve it is not uncommon to see the following misunderstanding
DSolve[y'[x]==3*y,y,x]
where users seem to usually get the notation correct for derivatives, but then use the function without the [ x ], probably because of textbook notation.

If the help system would do a scan for "bare" function names and print out a very explicit help message with exactly the name that needed to be modified and how, then this would avoid the current situation where the system knows exactly what is wrong but does not explain it to the user.

Similarly, in NDSolve, and other N* functions, it is not uncommon to see the following misunderstanding
NDSolve[y'[x]==q*y[x],y,{x,1,2}]
where the symbol q has not been assigned a constant numeric value.

If the help system would do a scan for symbols which did not have assigned constant numeric values and print out very explicit help saying exactly what symbol name needed to be assigned a value, then this would again avoid the system knowing exactly what was wrong yet printing some unrelated message about a non-numeric value at some meaningless numeric value.

The system should have all the needed information already evaluated to be able to do a quick scan for both of these just before it wades off into trying to find the real solution.

Both these problems are seen again and again.
POSTED BY: Bill Simpson
One improvement I would really like to see is fixing up the Graph-related functions, especially the property API.  It's rather unpredictable and there are lots of serious usability problems, which mostly come up when one would use SetProperty and related functions.  I'm not going to hijack this thread by describing examples, but I'm sure people who've used these functions have encountered them ...
POSTED BY: Szabolcs Horvát
I think that OrderingBy is not necessary.  If it were to be analogous to SortBy then OrderingBy[list, f] would give exactly the same output as Ordering[f /@ list], which can even be changed to
Ordering[f[list]]
for better performance if f[] is one of the optimized listable functions.

(Sorry about the code block, the editor mangles what I wrote otherwise.)
POSTED BY: Szabolcs Horvát
<s>Completely agree with the delaunay triangulation and convex hull; it is VERY slow compared to other packages. Rendering it useless for practical things!</s> huge improvements since version 10.

<s>OrderingBy: Like the entire family: Sort/SortBy, Gather/GatherBy, Split/SplitBy</s> not so useful indeed...

AxesScaling:  An option for ALL the plotting-functions (Plot,ListPlot,ContourPlot, DensityPlot, ParametricPlot et cetera and their 3D variants) that specifies the Axes to be Linear/Log/Date/Custom: AxesScaling -> {"Linear","Log","Log"} would result in a 3D plot with lin-log-log scales. Now you have to do everything manually by renaming ticks and rescaling coordinates (total mess).

<s>Please fix the logarithmic ticks: if it spans many orders of magnitude it will do 10^0, 10^2, 10^4 et cetera. That is not too bad; the worst is that it creates 8 subticks between them! They really have no meaning then!</s> fixed since version 10.

Local unit recognition; doing it over the internet is just too slow. Just ship it as a paclet like many components, so it stays updatable.
POSTED BY: Sander Huisman

@Vitaliy Kaurov You might want to check why the strikethrough < s > tag is not working in the post above... (I added the tags as an edit later).

POSTED BY: Sander Huisman
Posted 11 years ago
Units were a welcome addition to Mathematica, but are a pain to use.
Please add a "Units Palette" as part of the standard distribution.
POSTED BY: David Keith
Parallel as an option to some functions?

After posting this, I remembered that there are the ParallelTry and Parallelize functions. However, I wonder if some functions might have internal parts that could be parallelized, but for which Parallelize[_] might not work.

I don't know if this is true, but I imagine there are cases in which some functions may be parallelized.  For example,
FindClusters[__, TryParallel->True] 
or
NDSolve`FiniteDifferenceDerivativeADerivative[__,TryParallel->True] 
would launch kernels if possible and speed up computation.

I also wonder if TryGPU might be an option for some functions as well?
POSTED BY: W. Craig Carter
Posted 11 years ago
Make Solve and other similar functions "MatrixForm"-aware, and perhaps even "*Form"-aware where it makes sense, for the people who cannot resist the compulsion to desktop publish their inputs, only to then post asking why they cannot get useful and correct answers out of Solve and other similar functions. If the function gets a *Form input, just strip off the *Form and see if it can then produce a correct answer. This should be a very small programming task.
 In[1]:= Solve[MatrixForm[{{1, 2}, {3, 4}}].MatrixForm[{x, y}] == MatrixForm[{7, 9}]]
 
 Out[1]= {{\!\(\*TagBox[RowBox[{"(", "", GridBox[{{"1", "2"},{"3", "4"}},GridBoxAlignment->{"Columns" -> {{Center}},
  "ColumnsIndexed" -> {}, "Rows" -> {{Baseline}}, "RowsIndexed" -> {}},GridBoxSpacings->{"Columns"->{Offset[0.27999999999999997`],
  {Offset[0.7]}, Offset[0.27999999999999997`]}, "ColumnsIndexed" -> {}, "Rows" -> {Offset[0.2], {Offset[0.4]}, Offset[0.2]},
  "RowsIndexed" -> {}}], "", ")"}], Function[BoxForm`e$, MatrixForm[BoxForm`e$]]]\).\!\(\*TagBox[RowBox[{"(", "",
  TagBox[GridBox[{{"x"},{"y"}}, GridBoxAlignment->{"Columns" -> {{Center}}, "ColumnsIndexed" -> {}, "Rows" -> {{Baseline}},
  "RowsIndexed" -> {}}, GridBoxSpacings->{"Columns" -> {Offset[0.27999999999999997`], {Offset[0.5599999999999999]},
  Offset[0.27999999999999997`]}, "ColumnsIndexed" -> {}, "Rows" -> {Offset[0.2], {Offset[0.4]}, Offset[0.2]},
"RowsIndexed" -> {}}],Column], "", ")"}], Function[BoxForm`e$, MatrixForm[BoxForm`e$]]]\) -> \!\(\*TagBox[RowBox[{"(", "",
TagBox[GridBox[{{"7"},{"9"}}, GridBoxAlignment->{"Columns" -> {{Center}}, "ColumnsIndexed" -> {}, "Rows" -> {{Baseline}},
"RowsIndexed" -> {}}, GridBoxSpacings->{"Columns" -> {Offset[0.27999999999999997`], {Offset[0.5599999999999999]},
Offset[0.27999999999999997`]}, "ColumnsIndexed" -> {}, "Rows" -> {Offset[0.2], {Offset[0.4]}, Offset[0.2]},
"RowsIndexed" -> {}}],Column], "", ")"}], Function[BoxForm`e$, MatrixForm[BoxForm`e$]]]\)}}

In[2]:= Solve[{{1, 2}, {3, 4}}.{x, y} == {7, 9}]

Out[2]= {{x -> -5, y -> 6}}

There is no need to complicate the task by keeping track of whether a *Form has been removed and trying to restore it after the calculation; the user can easily add a //*Form to the end of their calculation if they wish. All that is needed is to list every function in Mathematica that would be broken if given a *Form, and strip it before they begin. This would begin to fill one gap between desktop publishing and the rest of Mathematica.
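For Solve alone, the idea can be sketched in two lines (my own helper name):

SetAttributes[solveNoForm, HoldAll];
solveNoForm[args___] := Solve @@ (Hold[args] /. MatrixForm[m_] :> m)

solveNoForm[MatrixForm[{{1, 2}, {3, 4}}].MatrixForm[{x, y}] == MatrixForm[{7, 9}]]

This strips the wrappers before Solve sees them, giving {{x -> -5, y -> 6}} as in In[2] above.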
POSTED BY: Bill Simpson
Sliders with a logarithmic scale.
POSTED BY: S M Blinder
Posted 11 years ago
S M Blinder, I saw this attempt at LogSlider. It only produces one-way behavior though (dragging the slider changes the value, but changing the value doesn't move the slider). Here is a simple start for LogSlider that produces the two-way behavior we'd expect. I'll put it as a stub for "Slider (computing)" on Wikicode tomorrow, so it can be collaboratively expanded.
LogSlider[Dynamic[x_], max_] :=
Slider[Dynamic[Log[max, x], (x = max^#) &]]

{LogSlider[Dynamic@x, 10^6], Dynamic@x}
POSTED BY: Michael Hale
DataArray and DataStore functions as described by Wolfram back in 2011-2012, please.
POSTED BY: Carl Lemp
I'd like a function analogous to Array that creates subscripted variables rather than indexed variables.
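In the meantime, a one-line sketch (my own name for it):

subscriptArray[x_, n_Integer] := Table[Subscript[x, i], {i, n}]

subscriptArray[a, 3]  (* {Subscript[a, 1], Subscript[a, 2], Subscript[a, 3]} *)

Note that Subscript[a, i] is a composite expression rather than an atomic symbol, which is part of what makes subscripted variables tricky and presumably why a built-in is being requested.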
POSTED BY: Frank Kampas
Some very basic integration between CDF Manipulate controls e.g. Buttons and the hosting html page.
POSTED BY: William Stewart
Export of unmeshed BSplineSurface and graphics primitives such as Sphere to a CAD file format such as Rhino's .3dm 
POSTED BY: W. Craig Carter
Posted 11 years ago
Export of 3D primitives and their boolean combinations to STEP and SAT would be nice. These are universal agnostic formats for transfer between CAD environments.
POSTED BY: David Keith
A convex hull function that works in dimensions>= 2 and doesn't always triangulate (e.g., the same functionality as qhull)
POSTED BY: W. Craig Carter
Posted 11 years ago
Improve the support for keyboard input! CurrentValue should be able to read more than just the modifier keys. The current support for only KeyDown events makes it very hard to make keyboard based controls for games. Sharing games with CDF is one of the best ways to get new people interested in Mathematica. I spent the first several years of teaching myself how to program by making two-player keyboard input games. Friends in my high school classes loved playing them. My school didn't offer any programming classes, and when I started an after school club for programming all people wanted to do was learn how to make games.

Mathematica is the only language I've ever used to make a result to show to someone else that doesn't support this functionality. Don't get me wrong. I love that I'll be able to read input from a DIY mass spectrometer in WL, but if you write your code on a keyboard, you need to fully support getting input from it. I don't have any data, but every second of my life tells me that for every student that wants to build a home weather station, we could gain ten users from students sharing games they make with their classmates.

Yesterday, I saw one of my friends from internships and the college programming team was making a game with Elm to practice reactive functional programming. I thought, "Hey, I'm making a Spacewar clone in Mathematica right now. I can probably sell him on Dynamic as opposed to Elm's lift and get him to install the CDF player, but I can't make a natural feeling input scheme for the Spacewar clone!"

I say this needs to be fixed or Elm or whatever the latest open-source startup language with a non-mainstream idea and a small library will eat Mathematica's lunch. I've got a boat to catch. I'll get caught up in a week. Here's the start of the Spacewar code if you want to try to make turning controls that don't jitter.
 t0 = AbsoluteTime[]; dt = 0; {width, height} = {600,
   400}; stars = {White, PointSize@Tiny,
     Point@Transpose@{RandomReal[#, #3],
        RandomReal[#2, #3]}} & @@ {width, height, 30}; ship1 := {White,
    Line[4 {{{0, -3}, {1, -1}, {1, 1}, {0, 3}, {-1,
        1}, {-1, -1}, {0, -3}}, {{1, -1}, {2, -2}, {2, -3}, {1/
        2, -2}}, {{-1, -1}, {-2, -2}, {-2, -3}, {-(1/2), -2}},
      If[CurrentValue[
        "ShiftKey"], {{1/
        2, -2}, {0, -4}, {-(1/2), -2}}, {}]}]}; {position1, velocity1,
   angle1} = {2./3 {width, height}, {0, 0}, 0}; EventHandler[
Dynamic[(dt = # - t0; t0 = #) &@AbsoluteTime[];
  If[CurrentValue["ShiftKey"],
   velocity1 += 20 {-Sin@angle1, Cos@angle1} dt];
  velocity1 += (10^5 Normalize@#/(Norm@#)^2 dt &)[{width, height}/2 -
     position1];
  Graphics[{stars, {White,
     Translate[Line[{#, -#}], {width, height}/2] & /@
      RandomReal[{-10, 10}, {2, 2}]},
    Translate[Rotate[ship1, angle1, {0, 0}],
     position1 = {Mod[#, width], Mod[#2, height]} & @@ (position1 +
         velocity1 dt)]}, PlotRange -> {{0, width}, {0, height}},
   ImageSize -> {width, height},
   Background -> Black]], {"LeftArrowKeyDown" :> (angle1 += 10 dt),
  "RightArrowKeyDown" :> (angle1 -= 10 dt)}]
POSTED BY: Michael Hale
I would like a built-in function which verifies whether a degree sequence is graphical, i.e. the same as Combinatorica`GraphicQ without the $RecursionLimit problems that Combinatorica function has.  This functionality must already be present in the kernel in some form for use by DegreeGraphDistribution, but it is not publicly accessible.
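In the meantime, a sketch based directly on the Erdős–Gallai theorem (my own name; O(n^2), but with no recursion-depth issues):

graphicalQ[degrees : {___Integer?NonNegative}] :=
    With[{d = Reverse[Sort[degrees]]},
        EvenQ[Total[d]] &&
            And @@ Table[
                Total[Take[d, k]] <= k (k - 1) + Total[Min[#, k] & /@ Drop[d, k]],
                {k, Length[d]}]]

graphicalQ[{3, 3, 2, 2}]  (* True *)
graphicalQ[{3, 1, 1}]     (* False: odd degree sum *)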
POSTED BY: Szabolcs Horvát
This is not a suggestion for a new function, but for some new functionality:  a hook into object destruction.

It's possible to extend Mathematica using C through MathLink or LibraryLink.  Sometimes it's necessary to have some data structure on the C side and make it accessible in Mathematica.  In this case it is necessary to have an explicit function for destroying these data structures on the other side, just like in low level languages that don't have automatic memory management or garbage collection.  This is somewhat inconvenient.

Examples are TetGenDelete (from TetGenLink) and ReleaseJavaObject/ReleaseNETObject (from JLink/NETLink).  Java does have garbage collection but the moment Java objects are made accessible from Mathematica it can't work automatically and it becomes necessary to destroy objects explicitly.

The best idiom I know for handling this situation is JavaBlock.  (One can implement something similar for any library that has its own data structures.)  It's much nicer to use than explicit free functions, but it's still not awfully convenient.

I wish Mathematica had a hook for expression destruction to make it possible for the free functions to be called automatically.  I'm not sure how this would work, or if it's even feasible, but it would be very useful.  The fact that even the JLink developers decided on using JavaBlock instead of adding this hook into the kernel makes me suspect that it is not at all a trivial matter.
POSTED BY: Szabolcs Horvát
DropWhile is easy to implement as Drop[#, LengthWhile[#, ...]]&.  Generally I'm against having too many basic functions, as Mathematica already has plenty.  But in this case I agree that DropWhile[] would be very nice to have.  Since we already have TakeWhile and LengthWhile, DropWhile would be a very natural addition, and I'm sure many people have looked for it in the docs just because they knew about the other *While functions.
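For instance, a two-line sketch (my own name for the helper):

dropWhile[list_, crit_] := Drop[list, LengthWhile[list, crit]]

dropWhile[{1, 2, 3, 4, 1}, # < 3 &]  (* {3, 4, 1} *)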
POSTED BY: Szabolcs Horvát
Posted 11 years ago
- DropWhile
- StringTake with an argument that does not generate an error if the string is too short
- StringStartsWithQ

some version of Row and Column that have the same syntax and options, or a tutorial explaining why they don't

 
POSTED BY: Luc Barthelet
Hi Luc,
Maybe StringMatchQ["teste", "t*"] can be used in place of StringStartsWithQ?
How would this work?
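For what it's worth, one way to sketch it with string patterns (my own name for the helper; unlike the "t*" abbreviated form, the prefix here is treated as a literal string):

stringStartsWithQ[s_String, prefix_String] := StringMatchQ[s, prefix ~~ ___]

stringStartsWithQ["teste", "t"]  (* True *)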
POSTED BY: Rodrigo Murta

StringStartsQ and StringEndsQ are now built-in functions!

POSTED BY: Sander Huisman
Posted 11 years ago
I'd like to see CompilationTarget->"C" work with Compile, which effectively means making CCodeGenerator available on ARM
or the Raspberry Pi.

I don't see what complications supporting the package on this platform involves, but there might be
a deeper reason why. If someone can enlighten me I'd be rather thankful.
POSTED BY: Stefan Schwarz
Functions dedicated to elliptic curve cryptography, especially curves of the form y^2 = x^3 + a x + b for several pairs {a,b}

e.g. Objects Secp[256k1] could be equivalent to EllipticCurve[0,7] and represent the locus of points on y^2 = x^3 + 7

e.g. Char[Secp[256k1]] = 2^256 - 2^32 - 977
Then you could do a lot with tables or plots...without really checking my syntax...here's an example

ListPlot[Flatten[
    Table[Base58[Char[EllipticCurve[i, k]]], {i, -10, 40}, {k, 1, 10}]]]

• PubKey -> PrivKey complexity order estimations, and other functions

e.g. CrackTime[pubkey, curve, speed...] = 10^897 years (ok, safe key...)

e.g. ValidKey[pubkey, privkey, Secp[256k1]] = True would verify a keypair

• emulate parts of keypair exchanges to examing faults

e.g. SignMessage[<message>, <type...]

e.g. SharedKey[EllipticCurve[0,7], <secret>, <modulus>]

• Hash pre-image time complexity computations

e.g. CrackTime[SHA512, <string>, ...]

• String Entropy
Complexity[<correcthorsebatterystaple>] = ...

• Rainbow tables

• Base58 encoding support and in general greater fluency in integer representations

So, so, so much could be done.  I just rattled this list off the cuff. If there is interest I will put more thought into exactly what would be excellent and write it up formally.
POSTED BY: Allen Majewski
A very simple addition to ExampleData: icon images.

open, close, process, new, add, remove, delete, warning, attention, error, etc
POSTED BY: Pedro Fonseca
Posted 11 years ago
What I really need almost every day, and what has been missing in Mathematica for many years, is a built-in user-friendly tick generation function. The CustomTicks` package is not sufficiently flexible and easy to use. And why do we not even have functions such as LinTicks and LogTicks built in?! Also, we still have no way to control the distance between the ticks and tick labels, and between tick labels and frame labels (these distances are often too large by default). All this functionality is crucially important for creating aesthetically pleasing graphics for publication purposes, but up to this time a user must spend a lot of time fighting with the extremely user-unfriendly Ticks option and finally still cannot achieve the desired appearance.
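As a stopgap, a minimal log-ticks sketch (my own helper name; major ticks only, and it assumes that on a LogPlot axis the custom Ticks positions are given in the natural-log coordinates the plot actually uses):

logTicks[min_?Positive, max_?Positive] :=
    Table[{Log[10.^e], Superscript[10, e]},
        {e, Ceiling[Log[10, min]], Floor[Log[10, max]]}]

LogPlot[x^2, {x, 1, 1000}, Ticks -> {Automatic, logTicks[1, 10^6]}]

A real built-in would also handle subticks, label padding, and the tick-to-label distances complained about above.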
POSTED BY: Alexey Popkov
Posted 11 years ago
I agree with the need for better control of ticks in graphics. But the need goes much further. While our ability to produce good graphics has improved with the recent releases, producing publication quality graphics for scientific journals is still a nightmare. Of course, you can do it, but only by generating an entire page of complicated scripting of graphics primitives.

I would like to see Wolfram take this issue seriously. A good start would be to look at available tools such as Igor Pro 6, SigmaPlot, and Origin -- and commit to producing a competitive capability within Mathematica. These tools also provide data analysis capabilities, but Mathematica already excels at this. All it lacks is the ability to make good presentation graphics.
POSTED BY: David Keith
I agree completely with the prior comment about improving ease of use with Graphics.  I know this isn't a single function (as the thread indicates), but I use Mathematica mainly to write interactive demonstrations and documents for my classes, and I find great difficulty in getting graphics to look the way I want without a surprisingly large amount of work, compared with the relative efficiency of much of the rest of the system.  Also, on more than one occasion, the program has crashed while I have been adding text boxes to graphics, etc.
POSTED BY: Rob Holman
I would like to see a high quality and fast Delaunay triangulation algorithm included as a built-in function.  Constrained Delaunay triangulation should be supported.  Something akin to TetGenLink, but for 2D, would be very nice.

The function from the Computational Geometry package is implemented in Mathematica and is much too slow for many real-world applications.
POSTED BY: Szabolcs Horvát
Posted 11 years ago
A pointer in various, perhaps new, warning messages returned by Solve and relatives to a concise tutorial that explains how to decide what function other than Solve might be more appropriate for the problem that was just given to Solve.
POSTED BY: Bill Simpson
Posted 11 years ago
A pointer in the various error messages returned by DSolve and similar functions to a tutorial that explains how to provide all the various forms of initial and boundary conditions to obtain satisfactory solutions. 
POSTED BY: Bill Simpson
Posted 11 years ago
Novice mode, possibly even on by default, but easily turned off by those who cannot accept having their likely typos pointed out. It would provide help messages to explain, especially to novice users, how Mathematica's expectations differ from theirs. These would be clear, specific help messages, as opposed to the current ones, explaining what a novice almost certainly needs to change in their input. This would include help for users who have become convinced they must desktop publish their input to Mathematica.
Highlight the "offending characters" in red, perhaps even blinking red, to direct attention to exactly where something needs to be changed. While most new users don't seem to write long lines, having the attention focused on one or a few characters would greatly help. I can recall, years ago, writing some longer and more complicated expressions and staring at them trying to understand exactly where in that line I was supposed to focus my attention.
If this could be driven by a user-readable, and perhaps even editable by advanced users, database of rules, then it might open up opportunities to experiment with enhancing it. Initially covering really well the top few dozen most common misunderstandings by new users might help them scale the learning curve much more rapidly.
POSTED BY: Bill Simpson
Posted 11 years ago
Unify[ notebookfile, notebookfile] which does a Prolog style, but Mathematica aware, unify of two notebooks, showing the minimal changes needed to make the two identical.
Diff[ notebookfile, notebookfile] which does a Unix style, but Mathematica aware, diff of two notebooks, showing the minimal differences between two notebooks.

Both of these would be very useful in trying to expose the changes that have been made over time to a sequence of notebooks.
POSTED BY: Bill Simpson
Rodrigo, what are your comments on MapIndexed versus ScanIndexed - any great advantages there?
POSTED BY: Sam Carrettie
You can do the calc just like this:
Scan[If[#[[1]] > 0.8, Return[#[[2]]]] &, MapIndexed[{#, #2} &, list]]

But for a big list this is a waste of memory too: applying MapIndexed to it just to create an index.
You can of course use a For with Return, but I don't feel that's a nice WL code approach.
POSTED BY: Rodrigo Murta
I'd like to see ParametricFindMinimum, which would enable running the same optimization problem with different starting points over and over again without setting up the problem each time.  There already is ParametricNDSolve, which enables repeated solutions of the same differential equation with different parameter values.
POSTED BY: Frank Kampas
Posted 11 years ago
I would like to see an option DeleteDuplicates for Sort.

I have made my own library that implements a function that does the same as DeleteDuplicates, for sorted lists (as it is, it only works for integers). See this. For the case I tested it on it was about twice as fast as DeleteDuplicates.

But having the option to delete duplicates while sorting would be even nicer and it would make that library much less "needed".
POSTED BY: Jacob Akkerboom
Posted 11 years ago
Hi Jacob,
Union[] will give a sorted list of the unique elements in its argument.
Best, David
POSTED BY: David Keith
Posted 11 years ago
David, thank you so much!
POSTED BY: Jacob Akkerboom
Posted 11 years ago
I've been working on a list of potential library extensions that would not go in the System namespace as part of Wikicode.
Wikicode to-do list


I was going through implementing stub packages for various articles, but I figured I could get a handle more quickly on larger design issues if I just started making a big to-do list. I'm browsing through articles by how close they are to root categories and how popular they are. I'm looking for data visualizations, models, tables, etc that aren't currently in Wolfram Alpha and that could currently be implemented in a page of code or less. Over time, the sophistication of what can be implemented in a page of code will increase and we can build more complex models.

The code for each will go in the Wikicode package with the same name as the associated Wikipedia article. Packages can dynamically load other packages. Contexts are typically based on the article title, but can be nested when necessary to prevent clutter. (e.g. "Economy of the United States" data might go under UnitedStates`Economy if UnitedStates gets too cluttered)

I've been asked what an item like "Aluminum: visual attributes as solid" means. For now, it would be convenient just to have named sets of graphics directives for color, transparency, specularity, etc. for common materials. So if I had a model of a soda can I could just put Aluminum`Material[] before it to quickly get something metallic looking. There are plenty of cool opportunities for using parametrized 3D color textures. For example, cutaway views of a simple strawberry model with adjustable size and randomized shape can be supported.
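As a rough illustration of the "named set of graphics directives" idea, here is a hypothetical sketch (the name `aluminumMaterial` and the particular directive values are invented for this example):

```mathematica
(* Hypothetical named material: a spliceable set of graphics directives *)
aluminumMaterial[] := Sequence @@ {GrayLevel[0.8], Specularity[White, 20]}

(* Prepend it to any 3D primitive for a quick metallic look *)
Graphics3D[{aluminumMaterial[], Cylinder[]}]
```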
POSTED BY: Michael Hale
A class of functions that I would like to suggest is not new, but an extension of the Scan function. Here are the functions:
    ScanOuter
    ScanThread
    ScanInner
    ScanIndexed
Each one would have the same syntax as its equivalent (Outer, Thread, ...), but with the same behavior as Scan, with much less memory consumption, because Scan doesn't have to save all its results. Another advantage would be the ability to use Return inside, which opens the possibility of breaking the computation at some point. As an application example, sometimes I need to apply a function to all combinations of two big lists, keeping just some specific items.
Let's simulate the lists using RandomInteger, and use Outer to simulate ScanOuter.
list1 = RandomInteger[1000, {10000, 2}];
list2 = RandomInteger[1000, {10000, 2}];
Reap[Outer[If[N@Norm@{##} > 1700, Sow[{#1, #2}]] &, list1, list2, 1]][[2]]
I believe that this kind of calculation would use much less memory with the equivalent Scan function, which does not need to store each result.
The calculation above simply doesn't run on my computer (in less than 5 minutes), and should be almost instant with ScanOuter.
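For comparison, a user-level sketch of what ScanOuter might do, using nested Do loops so the full outer product is never materialized (the name `scanOuter` is hypothetical):

```mathematica
(* Hypothetical scanOuter: apply f to each pair without storing results *)
scanOuter[f_, l1_, l2_] := Do[f[x, y], {x, l1}, {y, l2}]

list1 = RandomInteger[1000, {100, 2}];
list2 = RandomInteger[1000, {100, 2}];

(* Only the sown matches are kept, not the whole outer product *)
Reap[scanOuter[If[N@Norm@{#1, #2} > 1700, Sow[{#1, #2}]] &, list1, list2]][[2]]
```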

Another nice application is ScanIndexed.
Imagine you want to know the position of the first element greater than 0.8 in this list.
list={0.0314503,0.168573,0.291282,0.319608,0.426414,0.430967,0.626601,0.897033,0.923966,0.944242}
pos=ScanIndexed[If[# > 0.8, Return[First@#2]] &, list]
To do things like this today, you have to create an index variable or use a While loop, which makes me very uncomfortable inside the WL paradigm.
ScanIndexed could solve this in a more elegant way.
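A minimal user-level sketch of such a ScanIndexed, using Catch/Throw for the early exit instead of Return (the name `scanIndexed` is hypothetical):

```mathematica
(* Hypothetical scanIndexed: f receives each element and its position;
   Throw aborts the scan early and returns the thrown value *)
scanIndexed[f_, list_] :=
  Catch[Do[f[list[[i]], {i}], {i, Length[list]}]]

list = {0.0314503, 0.168573, 0.291282, 0.319608, 0.426414,
  0.430967, 0.626601, 0.897033, 0.923966, 0.944242};

scanIndexed[If[#1 > 0.8, Throw[First[#2]]] &, list]
(* 8 *)
```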
POSTED BY: Rodrigo Murta

Dear Rodrigo,

Some of this functionality is (kind of) implemented in the Lazy* functions from the built-in Streaming` package:

Needs["Streaming`"]

see also the thread here:

http://mathematica.stackexchange.com/questions/85278/is-there-a-built-in-equivalent-to-pythons-enumerate

POSTED BY: Sander Huisman

An important correction is that Streaming` is not a part of official Mathematica functionality, at the moment, but is present in undocumented form. This means that anything there can change in future versions, from function names to semantics etc. Besides, Streaming` doesn't really have the functions mentioned by @Murta, at present - although adding such implementations would be straightforward enough. But even then, they will work somewhat differently, because Streaming` works with data split in chunks, and uses objects which are not usual Lists, although in many ways behave like ones.

POSTED BY: Leonid Shifrin

ScanThread and ScanIndexed are here already.

Needs["GeneralUtilities`"]
?ScanThread
?ScanIndexed
POSTED BY: Alexey Golyshev

ScanThread and ScanIndexed are here already.

> Needs["GeneralUtilities`"]
> ?ScanThread
> ?ScanIndexed

But package GeneralUtilities is not listed in Documentation Center page guide/StandardExtraPackages.

In fact, a search indicates that this package exists only in the form of a downloaded paclet.

POSTED BY: Murray Eisenberg

Yes, the package GeneralUtilities is not listed in the Documentation Center, but it is nevertheless a standard package, not a downloaded one. I think this package and the Internal` package are intended only for developers. There are a lot of interesting things in them.

Needs["GeneralUtilities`"]
?GeneralUtilities`*

?Internal`*

From the Internal` package I use the function PartitionRagged from time to time. It's strange that there is no similar function in the kernel. Very useful.

Internal`PartitionRagged[Range[6], {1, 2, 3}]
{{1}, {2, 3}, {4, 5, 6}}
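For what it's worth, later versions added a documented kernel function with this behavior: TakeList (introduced in version 11.2) splits a list into sublists of the given lengths.

```mathematica
(* Documented equivalent of Internal`PartitionRagged *)
TakeList[Range[6], {1, 2, 3}]
(* {{1}, {2, 3}, {4, 5, 6}} *)
```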
POSTED BY: Alexey Golyshev