Message Boards

3.6K Views | 8 Replies | 5 Total Likes

Improving accuracy of neural network for determining qubit rotation angle

Posted 1 year ago
POSTED BY: Byron Alexander
8 Replies
Posted 1 year ago
Attachments:
POSTED BY: Sangdon Lee
Posted 1 year ago
POSTED BY: Sangdon Lee
Attachments:
POSTED BY: Byron Alexander
Posted 1 year ago

The syntax looks correct, although the training and testing datasets are usually split 70%/30%, i.e., ValidationSet -> Scaled[.3]. You can also split the input data yourself into a training dataset and a validation dataset, e.g., ValidationSet -> myValidationSet. That way, you can apply the NetMeasurements function to compute various measurements on your validation set.
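A minimal sketch of this split-then-measure workflow, assuming `allData` is a list of input -> output rules and `net` is the untrained network (both names are illustrative, not from the original post):

```wolfram
(* Randomly hold out 30% of the data for validation *)
{myTrainingSet, myValidationSet} =
  TakeDrop[RandomSample[allData], Round[0.7 Length[allData]]];

(* Train with an explicit validation set instead of ValidationSet -> Scaled[.3] *)
trainedNet = NetTrain[net, myTrainingSet, ValidationSet -> myValidationSet];

(* Measurements are then computed on data the net never trained on *)
NetMeasurements[trainedNet, myValidationSet, {"Accuracy", "Precision"}]
```

The key point is that NetMeasurements is only meaningful on held-out data; evaluating it on the training set will report optimistically high scores.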

I noticed that you are encoding the one-hot vector as a number, not a "string"; think about whether the "string" would make more sense. If strings are used, your problem becomes classification, and your net has to be modified accordingly, especially the last layer.

  • Number? e.g., {1,0,0,0,0,...} to 1, {0,1,0,0,0,...} to 2, ...
  • String? e.g., {1,0,0,0,0,...} to "1", {0,1,0,0,0,...} to "2", ...
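To illustrate the string (classification) variant: the last layer should emit class probabilities via a SoftmaxLayer, decoded with a "Class" NetDecoder. This is only a sketch; the layer sizes and the number of classes (50, matching the output shown later in the thread) are illustrative assumptions, not the original poster's architecture:

```wolfram
(* Hypothetical classifier head for string-labeled one-hot targets *)
classes = ToString /@ Range[50];

net = NetChain[
  {LinearLayer[128], Ramp,               (* hidden layer, size is an assumption *)
   LinearLayer[Length[classes]],         (* one logit per class *)
   SoftmaxLayer[]},                      (* convert logits to probabilities *)
  "Output" -> NetDecoder[{"Class", classes}]  (* decode to the class strings *)
]
```

With this setup, NetTrain uses cross-entropy loss automatically, and classification measurements such as "Accuracy" and per-class "Precision" in NetMeasurements apply directly.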

Adding more hidden layers does not necessarily increase prediction accuracy, as demonstrated in Stephen Wolfram's blog post: https://writings.stephenwolfram.com/2024/03/can-ai-solve-science/

POSTED BY: Sangdon Lee
Posted 1 year ago

@SangdonLee Many thanks for your response. Just one query: when I try to evaluate NetMeasurements using the following code:

validationData = trainingData2b;
accuracy = NetMeasurements[trainedNet, validationData, "Accuracy"]
precision = NetMeasurements[trainedNet, validationData, "Precision"]

I obtain the following strange results: 
1.
<|1 -> 1., 2 -> 1., 3 -> 1., 4 -> 1., 5 -> 1., 6 -> 1., 7 -> 1., 
 8 -> 1., 9 -> 1., 10 -> 1., 11 -> 1., 12 -> 1., 13 -> 1., 14 -> 1., 
 15 -> 1., 16 -> 1., 17 -> 1., 18 -> 1., 19 -> 1., 20 -> 1., 21 -> 1.,
  22 -> 1., 23 -> 1., 24 -> 1., 25 -> 1., 26 -> 1., 27 -> 1., 
 28 -> 1., 29 -> 1., 30 -> 1., 31 -> 1., 32 -> 1., 33 -> 1., 34 -> 1.,
  35 -> 1., 36 -> 1., 37 -> 1., 38 -> 1., 39 -> 1., 40 -> 1., 
 41 -> 1., 42 -> 1., 43 -> 1., 44 -> 1., 45 -> 1., 46 -> 1., 47 -> 1.,
  48 -> 1., 49 -> 1., 50 -> 1.|>

Firstly, I don't think the accuracy could be 1. Secondly, I would have expected a single real number between 0 and 1 for the precision (instead I get this strange output). Do you have any idea what is going on here?

POSTED BY: Updating Name
Posted 1 year ago
POSTED BY: Sangdon Lee

@SangdonLee Thanks for your response. Having implemented your suggestions, I do note an improvement. One query: do you know how to incorporate the ValidationSet option into this type of neural network to increase accuracy and prevent overfitting? I have highlighted in purple my attempt at including ValidationSet. It does run, but I don't think it is set up in an optimal way. I attach the revised notebook.
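For reference, ValidationSet is an option of NetTrain rather than a standalone function, so it goes inside the NetTrain call. A common anti-overfitting setup combines it with early stopping; the option values below are illustrative assumptions, not tuned for this problem:

```wolfram
(* Hold out 30% of trainingData2b for validation and stop training
   when the validation loss has not improved for 10 rounds *)
trainedNet = NetTrain[net, trainingData2b,
  ValidationSet -> Scaled[0.3],
  TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "Patience" -> 10|>
]
```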

POSTED BY: Byron Alexander