I am testing sequence prediction: if things happen in a specific sequence, what is the likely next step? Computation ought to be part of machine learning algorithms, yet not even Google's "Prediction API" can solve this simple mathematical exercise. In fact, in my naivety, I reported it as a possible bug, and they emailed me:
As far as your example, the Prediction API will not learn that particular pattern because when trying to classify, it will choose from one of the output classes seen in the training set, so without instances of class "10th", it will not even know that such a class exists, does that make sense?
I have used two simple data sets.
Classification:
catData = {
   {"1st", "2nd", "3rd", "4th", "5th", "6th", "7th", "8th"},
   {"2nd", "3rd", "4th", "5th", "6th", "7th", "8th", "9th"},
   {"3rd", "4th", "5th", "6th", "7th", "8th", "9th", "10th"},
   {"4th", "5th", "6th", "7th", "8th", "9th", "10th", "1st"},
   {"5th", "6th", "7th", "8th", "9th", "10th", "1st", "2nd"},
   {"6th", "7th", "8th", "9th", "10th", "1st", "2nd", "3rd"},
   {"7th", "8th", "9th", "10th", "1st", "2nd", "3rd", "4th"},
   {"8th", "9th", "10th", "1st", "2nd", "3rd", "4th", "5th"},
   {"9th", "10th", "1st", "2nd", "3rd", "4th", "5th", "6th"}
   };

(* predict the first column from the remaining seven columns *)
c = Classify[catData -> 1];
Classification request:
c[{"1st","2nd","3rd","4th","5th","6th","7th"}]
I have purposely left out the row starting with "10th" to see whether the model can predict that label given the pattern in the rest of the data.
Expected Result:
Output="10th"
Unexpected Result:
Output="1st"
I also attempted this with a regression model.
Regression:
predictData = {
   {1, 2, 3, 4, 5, 6, 7, 8},
   {2, 3, 4, 5, 6, 7, 8, 9},
   {3, 4, 5, 6, 7, 8, 9, 10},
   {4, 5, 6, 7, 8, 9, 10, 1},
   {5, 6, 7, 8, 9, 10, 1, 2},
   {6, 7, 8, 9, 10, 1, 2, 3},
   {7, 8, 9, 10, 1, 2, 3, 4},
   {8, 9, 10, 1, 2, 3, 4, 5},
   {9, 10, 1, 2, 3, 4, 5, 6}
   };

(* predict the first column from the remaining seven columns *)
p = Predict[predictData -> 1];
Prediction request:
p[{1,2,3,4,5,6,7}]
Expected Result:
Output=10
Unexpected Result:
Output=6
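A quick check of the training outputs shows that 10 never appears as an output value either, so a method that interpolates among the observed outputs has little reason to return it; asking the predictor for its predictive distribution (the standard "Distribution" property) can help show whether it is simply hedging among the values it has seen rather than computing the pattern:

Union[predictData[[All, 1]]]               (* output values seen in training: 1 through 9 *)
p[{1, 2, 3, 4, 5, 6, 7}, "Distribution"]   (* predictive distribution around the decision *)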
This represents a real-world scenario in which I am trying to pick the best option to present to a user based on their past behavior. The next item is unknown because no user has chosen it before, and its classification label hasn't been added to the training data yet because labels are generated dynamically by the crowd. However, we don't want to give users just any random option; we want to give them something highly relevant during their first interaction with the system.
The whole premise of "prediction" seems to imply the ability to predict a categorical label based on the other example labels. In this data set, it should be easy to identify the pattern and auto-generate "10th" as a category label.
In this scenario, mathematics is all that is needed to determine the next item in the sequence. I guess that if we had tons of sequences that are close to, but not exactly in, perfect order, machine learning algorithms would become much more useful.
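To make that concrete, here is a minimal rule-based sketch of what I mean by "mathematics is all that is needed" (it assumes, as in the data above, that every row is a cyclic shift of 1 through 10, so the missing first column is just the cyclic predecessor of the first known value):

(* cyclic predecessor in a cycle of length n; pure arithmetic, no training data required *)
cyclicPredecessor[x_Integer, n_Integer: 10] := Mod[x - 2, n] + 1

cyclicPredecessor[1]  (* 10, the value Predict was expected to return for {1, 2, 3, 4, 5, 6, 7} *)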
If there is already a known solution to this issue, that would be amazing to know. Forgive me for not being trained/educated enough to understand why this isn't already built into all machine learning algorithms.
I suppose someone could use Transpose to build a model from the columns of the dataset in addition to the rows. However, in my view, anything related to permuting, shifting, reorganizing, or reordering the data in search of better predictions should be built natively into the algorithms. Even auto-generating new columns/features from patterns, sequences, and mathematical relationships that can be derived from the existing data should be automated. None of these things require human input, and they are mathematical in nature.
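As a rough sketch of the derived-column idea (my own illustration only, not a claim that it fixes the extrapolation problem): append the consecutive differences of the known elements as extra features, so the wrap-around point becomes an explicit column, and retrain on input -> output pairs:

(* derive extra columns (consecutive differences) from the known part of each row *)
augmentInput[known_List] := Join[known, Differences[known]]

(* input = columns 2-8 plus their differences, output = column 1, as before *)
augPairs = augmentInput[Rest[#]] -> First[#] & /@ predictData;
p2 = Predict[augPairs];
p2[augmentInput[{1, 2, 3, 4, 5, 6, 7}]]
(* whether this returns 10 depends on the method Predict picks; the point is only
   the mechanical generation of new features from existing ones *)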
I bring this to the Wolfram Community because the Wolfram Language is uniquely positioned to tap into knowledge and computation in order to create drastically superior machine learning functions. If everything can be symbolic, everything can be computed.