Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions from all groups sorted by active
Wolfram Language with Unity Game Engine
https://community.wolfram.com/groups/-/m/t/3089393
Hi Everyone!
I need to synchronize my Mathematica 12.0 with Unity 2021.
Following the instructions in this video
[Build Your First Game in Wolfram Language with Unity Game Engine: Live with the R&D Team][1]
and this manual
https://reference.wolfram.com/language/UnityLink/guide/GettingStarted.html
I've installed and activated Wolfram Engine 12.2,
but I have no idea how to get the wolframUnity.package file (which needs to be added to the Unity project).
Also, when I run the command Needs["UnityLink`"]
it gives me this error:
![unityLink][2]
I assume that UnityLink still needs to be installed after running it.
[1]: https://www.youtube.com/watch?v=Bv8ToQornyo
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=NeedsUnityLink.jpg&userId=2357095
Sergey Scorin, 2023-12-23T10:34:36Z
Plotting multiple `FlightData` items on one map
https://community.wolfram.com/groups/-/m/t/3185572
How do I get this:
`FlightData[Entity["Airport", "KMCO"] -> All, "FlightPath", Today]`
onto a _SINGLE_ `GeoGraphics` map?
Steven Buehler, 2024-05-29T20:24:43Z
Quantum locking mechanism using quantum phase estimation and phase kickback for ASCII passwords
https://community.wolfram.com/groups/-/m/t/3185554
![quantum locking system circuit diagram][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2341hero.png&userId=20103
[2]: https://www.wolframcloud.com/obj/484a95b3-513e-4e5e-ade7-e83fdc0e0a97
Sebastian Rodriguez, 2024-05-29T18:04:21Z
Measuring 4x4 Reversi: Canonicalization & Impartiality Functions on Multiway Graphs
https://community.wolfram.com/groups/-/m/t/3092013
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/14c707de-aa89-4876-84dc-95263be13c7c
Andrea Li, 2023-12-27T21:54:55Z
Under the sea: nutrition and recipe risk analysis of aquatic foods
https://community.wolfram.com/groups/-/m/t/3173360
![Magnesium levels in sea vegetables vs nuts][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2700Magnesiuminseavegetablesvsnuts.png&userId=20103
[2]: https://www.wolframcloud.com/obj/2dbf0f96-8cf3-47d6-99e9-56b1f266c249
Gay Wilson, 2024-05-09T15:38:51Z
Mathematica crashes during import of simple SystemModeler model
https://community.wolfram.com/groups/-/m/t/3182972
Hi all,
I'm currently using the trial and playing with its functionality. I built a simple battery cell model in SystemModeler:
![enter image description here][1]
In Mathematica I am running:
celMdl = SystemModel["cell"]
In the best case, Mathematica produces the error: SystemModel: cell is not a loaded model.
Then I realized that I have to open this model in SystemModeler before trying to import it.
But in that case, Mathematica just crashes without any prompt.
Related Modelica code:
model cell "A single cell of the battery"
Modelica.Electrical.Batteries.BatteryStacksWithSensors.CellRC cellRC(cellData = cellData, useHeatPort = true, SOC0 = initialSOC) annotation(Placement(visible = true, transformation(origin = {-0.038, 40}, extent = {{-55.962, -55.962}, {55.962, 55.962}}, rotation = 0)));
parameter Modelica.Electrical.Batteries.ParameterRecords.TransientData.CellData cellData(rcData = {rcData}, Qnom = nominalChargeCapacity, Ri(nominal = 0.001) = internalResistance, OCVmax = OCVmax, OCVmin = 3.6, useLinearSOCDependency = false, OCV_SOC = OCVSOCtable) annotation(Placement(visible = true, transformation(origin = {-12.5, 80}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
parameter Modelica.Electrical.Batteries.ParameterRecords.TransientData.RCData rcData(R = equivalentCircuitResistance, C = equivalentCircuitCapacitance) = Modelica.Electrical.Batteries.ParameterRecords.TransientData.RCData(R = 0.0005, C = 0.0004) annotation(Placement(visible = true, transformation(origin = {12.5, 80}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
Modelica.Electrical.Batteries.Interfaces.CellBus cellBus annotation(Placement(visible = true, transformation(origin = {-33.62, -89.962}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {-54.067, -51.555}, extent = {{-10, -10}, {10, 10}}, rotation = -360)));
Modelica.Thermal.HeatTransfer.Interfaces.HeatPort_a port_a annotation(Placement(visible = true, transformation(origin = {0, -90}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {-2.084, -52.596}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
Modelica.Electrical.Analog.Interfaces.PositivePin pin_p annotation(Placement(visible = true, transformation(origin = {-150, 40}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {-100, 0}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
Modelica.Electrical.Analog.Interfaces.NegativePin pin_n annotation(Placement(visible = true, transformation(origin = {150, 40}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {100, -4.167}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
Modelica.Thermal.HeatTransfer.Components.HeatCapacitor heatCapacitor(C = heatCapacity, T.start = 293.15) annotation(Placement(visible = true, transformation(origin = {73.93, -18.93}, extent = {{-29.93, -29.93}, {29.93, 29.93}}, rotation = -90)));
parameter Modelica.Units.SI.HeatCapacity heatCapacity = 980 * 0.06 "Specific heat capacity of the battery cell" annotation(Dialog(tab = "Thermophysical properties"));
parameter Real initialSOC = 1 "Initial state of charge of the battery";
parameter Modelica.Units.NonSI.ElectricCharge_Ah nominalChargeCapacity = 5 "Battery capacity";
parameter Modelica.Units.SI.Resistance internalResistance = 0.001 "Internal resistance of the battery";
parameter Modelica.Units.SI.Voltage OCVmax = 4.2 "Maximum battery voltage";
parameter Modelica.Units.SI.Resistance equivalentCircuitResistance = 0.0003;
parameter Modelica.Units.SI.Capacitance equivalentCircuitCapacitance = 0.000001;
parameter Real OCVSOCtable[:, 2] = {{0.0, 0.8571428571428571}, {1.0, 1.0}};
Modelica.Thermal.HeatTransfer.Components.ThermalConductor thermalConductor(G = thermalConductivityCoeff * surfaceArea / radius) annotation(Placement(visible = true, transformation(origin = {0, -52.584}, extent = {{-10, -10}, {10, 10}}, rotation = -90)));
parameter Modelica.Units.SI.Area surfaceArea = 2 * 3.14 * 0.016 * 0.08 "Surface area of the battery";
parameter Real radius = 0.009 "Radius of the battery";
parameter Modelica.Units.SI.ThermalConductivity thermalConductivityCoeff = 85 "Thermal conductivity coefficient of the battery";
equation
connect(cellRC.cellBus, cellBus) annotation(Line(visible = true, origin = {-33.617, -61.565}, points = {{0.002, 56.795}, {0.002, -28.397}, {-0.003, -28.397}}, color = {255, 204, 51}, thickness = 0.5));
connect(cellRC.n, pin_n) annotation(Line(visible = true, origin = {102.962, 40}, points = {{-47.038, 0}, {47.038, 0}}, color = {0, 0, 255}));
connect(cellRC.p, pin_p) annotation(Line(visible = true, origin = {-103, 40}, points = {{47, 0}, {-47, 0}}, color = {0, 0, 255}));
connect(heatCapacitor.port, cellRC.heatPort) annotation(Line(visible = true, origin = {14.641, -17.941}, points = {{29.359, -0.989}, {-14.679, -0.989}, {-14.679, 1.979}}, color = {191, 0, 0}));
connect(thermalConductor.port_a, cellRC.heatPort) annotation(Line(visible = true, origin = {-0.019, -33.429}, points = {{0.019, -9.156}, {0.019, -4.156}, {-0.019, -4.156}, {-0.019, 17.467}}, color = {191, 0, 0}));
connect(port_a, thermalConductor.port_b) annotation(Line(visible = true, origin = {0, -76.292}, points = {{0, -13.708}, {0, 13.708}}, color = {191, 0, 0}));
annotation(uses(Modelica(version = "4.0.0")), experiment(StopTime = 10.0), version = "1", Diagram(coordinateSystem(extent = {{-150, -90}, {150, 90}}, preserveAspectRatio = true, initialScale = 0.1, grid = {5, 5})), Icon(coordinateSystem(extent = {{-100, -100}, {100, 100}}, preserveAspectRatio = true, initialScale = 0.1, grid = {10, 10}), graphics = {Rectangle(visible = true, rotation = -90, lineColor = {128, 0, 0}, fillColor = {55, 142, 255}, pattern = LinePattern.None, fillPattern = FillPattern.Sphere, extent = {{-50, -100}, {50, 100}}, radius = 25), Polygon(visible = true, origin = {-7.143, 14.286}, lineColor = {255, 255, 255}, fillColor = {255, 255, 255}, fillPattern = FillPattern.Solid, points = {{7.143, 33.315}, {7.143, -4.286}, {57.143, -4.286}, {7.143, -57.603}, {7.143, -24.286}, {-42.857, -24.286}})}));
end cell;
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=%D0%97%D0%BD%D1%96%D0%BC%D0%BE%D0%BA%D0%B5%D0%BA%D1%80%D0%B0%D0%BD%D0%B0%D0%B72024-05-2808-36-13.png&userId=1870813
Oleg Kmechak, 2024-05-27T22:43:14Z
HedgeHog - AI multi-agent trading system
https://community.wolfram.com/groups/-/m/t/3182230
HedgeHog
========
**AI multi-agent trading system**
Abstract
--------
Hedgehog is an automated trading system based on an ensemble of neural network (NN) bots. These trading bots, treated as a population, are continuously optimized by a combination of reinforcement learning at the individual level and a genetic algorithm at the population level. In particular, we utilize the NEAT algorithm, which allows automatic optimization over different neural network topologies. A selection of the best-performing bots is deployed to trade. Here, we show an implementation in Wolfram Mathematica that utilizes parallel computation to accelerate the training process. Besides the core neural-network-based trading system, Hedgehog also provides integration with a financial data provider and a broker.
Introduction
------------
In recent years, the use of neural network (NN) trading bots in the stock market has gained significant attention due to their potential to improve trading strategies and generate higher profits. These intelligent systems leverage the power of neural networks, a type of artificial intelligence, to analyze vast amounts of historical market data and make predictions about future market trends.
![enter image description here][1]
NEAT
----
In the context of neural network design and training, one typically works with a fixed network structure, where only the weights (free parameters) of the operators in the network are optimized during training. The NeuroEvolution of Augmenting Topologies ([NEAT][2]) algorithm, as the name suggests, goes a step beyond this standard paradigm by proposing evolutionary optimization of neural network topologies. NEAT evolves a population of NNs, each completely characterized by a tuple of {structure, weights}, by breeding individual NNs through operations on their structure...
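As a concrete illustration of the {structure, weights} tuple, here is a minimal genome representation, sketched in Python (the class and field names are ours, not from NEAT or Hedgehog):

```python
from dataclasses import dataclass, field

@dataclass
class Genome:
    """Minimal NEAT-style genome: a topology plus its weights."""
    nodes: set                                    # node ids (inputs, hidden, outputs)
    weights: dict = field(default_factory=dict)   # (src, dst) -> synapse weight

    @property
    def structure(self):
        # The topology is just the set of directed edges carrying weights.
        return set(self.weights)

g = Genome(nodes={0, 1, 2}, weights={(0, 2): 0.5, (1, 2): -1.3})
# g.structure -> {(0, 2), (1, 2)}
```

Breeding operates on `structure`, while conventional training only touches `weights`.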
Reinforcement learning of NN bot
--------------------------------
Here, we introduce the basics of reinforcement learning ... and formulate the individual's learning problem: at each tick, perform one of the allowed actions with the goal of maximizing portfolio value while avoiding margin calls...
- The target is a single symbol, and the considered actions are BUY, SELL, HOLD, CLOSE_SHORT and CLOSE_LONG of a given but fixed lot (of shares)
- The decision to take an action is influenced by data from a set of selected symbols and the status of the portfolio managed by the agent (NN)
- A subset of best-performing agents is persisted [in DB/file store]
- In a new round, a new set of agents (NNs) is proposed by an evolutionary mechanism (breeding) from the _previous_ (previous round) and _current_ best-performing NNs
- This process is executed continuously over the growing set of historical data
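The action space and the per-tick bookkeeping can be sketched as follows; the `Action` enum mirrors the list above, while `step`, `LOT` and the cash accounting are simplified assumptions of ours (no spread, margin or swaps):

```python
from enum import Enum

class Action(Enum):
    BUY = 0; SELL = 1; HOLD = 2; CLOSE_SHORT = 3; CLOSE_LONG = 4

LOT = 10  # given but fixed lot of shares (illustrative value)

def step(position, cash, price, action):
    """Apply one tick's action; the agent's goal is to maximize
    the portfolio value cash + position * price."""
    if action is Action.BUY:
        position += LOT; cash -= LOT * price
    elif action is Action.SELL:
        position -= LOT; cash += LOT * price
    elif action is Action.CLOSE_LONG and position > 0:
        cash += position * price; position = 0
    elif action is Action.CLOSE_SHORT and position < 0:
        cash += position * price; position = 0
    return position, cash

step(0, 1000.0, 50.0, Action.BUY)  # -> (10, 500.0)
```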
Results
-------
In an effort to verify the usefulness of the trained models, we subjected them to backtesting on historical data (see the Backtesting section). Test results were aggregated (gold color) and compared with the same number of tests with a senate of random, untrained models (blue color).
![backtest results][3]
(x-axis: ending profit; y-axis: number of cases)
Backtesting
-----------
To verify the correctness and robustness of the solution, I chose two types of testing: forward testing and backward testing. In this part, I will describe the backward testing procedure and show results from the first functional prototypes. The backward testing procedure itself is divided into several phases.
As it is important to test the robustness of the solution, the test package is divided into three main test instances. The first instance uses raw historical data, from which 150 individual random trading sessions are carried out. In the second phase, the historical data is mixed with white noise corresponding to a ±5% deviation, and another 150 individual random trading sessions are run. The third phase proceeds analogously, only with a ±10% deviation.
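The noise injection for the second and third test instances can be sketched like this; the function name and the uniform-noise model are our assumptions, since the post only specifies the ±5% and ±10% deviations:

```python
import random

def add_noise(prices, deviation):
    """Mix a historical price series with white noise of the given
    relative deviation, e.g. 0.05 for +-5%."""
    return [p * (1 + random.uniform(-deviation, deviation)) for p in prices]

# Three test instances, each backing 150 random trading sessions:
# raw history, +-5% noise, +-10% noise.
for deviation in (0.0, 0.05, 0.10):
    noisy = add_noise([100.0, 101.5, 99.8], deviation)
```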
The implementation of the test trading session itself proceeds as follows:
For each time stamp in the test, the "senate" (a collective of n models trained for a specific symbol) is offered all required inputs; the aggregated output of all models is then evaluated in the form of a vote, and the final decision is applied to the simulated environment.
Around 450 such tests are subsequently evaluated and archived in the cloud in the form of a simple web application.
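The senate's vote can be sketched as a simple plurality rule; the tie-break to HOLD is our assumption, since the post does not specify one:

```python
from collections import Counter

def senate_decision(votes):
    """Aggregate the individual model outputs of the senate by plurality
    and return the final decision; ties fall back to 'HOLD'."""
    tally = Counter(votes).most_common()
    if len(tally) > 1 and tally[0][1] == tally[1][1]:
        return "HOLD"
    return tally[0][0]

senate_decision(["BUY", "BUY", "SELL", "HOLD", "BUY"])  # -> 'BUY'
```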
![backtest results][4]
Forward testing
---------------
The second approach, forward testing, required developing an API script enabling communication with the broker application, which provides interactive access to the exchange. In this third-party application we created a demo account, and through the relevant API we interact with the "senate" of trained models. When reading scientific publications dealing with traditional signals, a combination of different methods often turned out to be more effective than any single method alone. With this intuition in mind, we decided it would be most convenient to provide the data from the broker (prices of monitored symbols, state of the demo portfolio) to all models independently and to interpret their individual decisions as votes.
When the models agree in the vote on some decision (buy, sell, hold, etc.), this decision is implemented through the API on the broker's side. The performance of the bots is actively monitored, and their status is stored in the cloud in the form of an overview application.
![enter image description here][5]
![forward testing cloud monitor][6]
Implementation
--------------
The central point of our efforts is to find models whose combined output trades the designated symbol as effectively as possible. This part includes the initialization of parallel compute nodes, followed by the distribution of the training data to each kernel. If successful learning has already taken place in the past, the system tries to load those models from the database and repopulate the population; if it is an initial learning run, the required number of models is created randomly. The required state variables, such as the number of trading days and initial funds, are set. As the last step before learning starts, the required value of the quality measure is calculated.
Learning itself consists of several phases. First, the population of models is divided between the individual kernels. Each kernel then proceeds as follows: create a neural network (a Wolfram Mathematica object) from the model; create the environment by opening the "clerk" device object; then divide the learning data into a training set and a validation set of the required length.
Due to the significant amount of data when training a large number of models over many days at a small time scale, it was necessary to divide the training into two phases (elementary training and advanced training). After the end of the first phase, the validation results gathered during learning are evaluated, and models that meet the criteria advance to the second phase of learning, where they are exposed to data down to a 1-minute scale. Subsequently, it is evaluated whether the validation fulfills the conditions for entering the population's selection process.
The selection is simple: no more than 10% of the entire population is selected, and the selection is saved in the cloud. Then all possible pairings are created (much like Beverly Hills). Each pair produces an offspring, and to achieve sufficient repopulation, these descendants self-replicate with the required number of new random mutations until the necessary population size is reached.
A new bar is set for learning progress, and the process returns to the beginning with the new population, in an endless cycle.
Training pipeline
-----------------
The basis of the whole project is the training pipeline. It comprises the comprehensive procedure by which, in an environment of parallel kernels, a large population of evolutionarily generated models is trained. Intuitively, it can be divided into the following parts.
- initialization
- dependencies fetching
- training data loading
- parallel training process
- validation and evolution
Under initialization, we can imagine the startup of parallel kernels across networked devices. It was a personal challenge to involve the Wolfram Cloud as much as possible in the solution; therefore, I decided to export the developed libraries (.wls scripts) to the cloud. The training script downloads these scripts from the cloud and distributes them among the parallel kernels. After initial attempts, we settled on using price data downloaded directly from the broker through the API we programmed. These data are stored on disk in the form of Wolfram Mathematica TimeSeries objects, exported as 'symbol.m', and each kernel loads them individually into memory.
Every slightly more complex program requires some state variables that determine the starting points for learning, such as the length of the required learning interval, initial capital, total population size, etc. Then, as the last step before the learning itself, there is an attempt to load existing models: if successful learning has already taken place in the past, they repopulate a population of the required length; if it is initial learning, a population of random progenitors is generated. For operational reasons, this population is copied to a backup variable: if stagnation occurs after learning, this archived population is reused, saving the time needed for pairing (which can be time-consuming in specific cases). Then the time has come for the learning process itself.
Since this is an evolution that, in principle, should not be limited in time, an infinite loop seemed intuitively the most correct choice. But constructs like While[True, ...] do not belong in the toolkit of a good programmer; a more reasonable approach was to control this cycle with an external variable that effectively functions as a control mechanism. A cloud object storing a simple string was sufficient for these purposes. Later it turned out that this mechanism can also be used for regularly updating the training data, or for implementing changes in the libraries without interrupting the learning process: the changes take effect after the currently running population finishes, so the next generation takes into account the required changes in data or code. The last option is to end the process "decently".
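The control mechanism can be sketched as follows; in Hedgehog the command string lives in a cloud object, while here it is any callable, and the command names are our placeholders:

```python
def training_loop(next_command, run_generation):
    """An 'endless' evolution loop steered by an external control string
    instead of a bare While[True, ...]."""
    generations = 0
    while True:
        cmd = next_command()                          # poll the control string
        if cmd == "stop":                             # end the process "decently"
            break
        run_generation(refresh=(cmd == "refresh-data"))
        generations += 1
    return generations

# Simulated control stream: train once, refresh data and train again, stop.
cmds = iter(["run", "refresh-data", "stop"])
training_loop(lambda: next(cmds), lambda refresh: None)  # -> 2
```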
Assuming that our control mechanism finds a call to run learning, the following machinery will start.
The entire population is collected and distributed among all kernels, each of which performs the following. A neural network model is created from the specified Graph object, together with the required loss function. The environment is generated in the form of a DeviceObject with the required initialization parameters. Historical data is divided into a training set and a validation set. The first learning phase then starts. After the first phase, the set of 'ValidationMeasurementsLists' for the given model is evaluated; if it meets the minimum requirements, the model advances to the next learning phase, otherwise it is discarded. In the second phase, the model is pushed through learning on smaller time scales, in an attempt to teach it to 'perceive' subtle changes in price on small intervals but over a longer time span. If the average of the set of validation cycles meets the minimum requirements, as at the end of the first phase, the model is retained and advances to the elimination process.
In this process, the 10% best models of the generation are selected and analyzed to see whether there has been stagnation compared to the previous generation. If stagnation has occurred, the allowed number of mutations is increased, and the population stored at the beginning of the learning process is reused with the new required number of mutations. In the case of progress, i.e. when the models in the new selection are more successful than in the previous generation, these models mate with each other (like Beverly Hills 90210) and create a new generation of proto-siblings. Next, the proto-siblings are each copied, with a different chance of mutation, to an amount exceeding the required size of the new generation. From this set, the new generation with the required number of individuals is randomly selected.
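The selection and stagnation handling described above can be sketched like this (the 10% fraction comes from the text; the +1 mutation step on stagnation is our reading):

```python
def select_top(population, fitness, frac=0.10):
    """Keep the best ~10% of the population by fitness."""
    ranked = sorted(population, key=fitness, reverse=True)
    return ranked[:max(1, int(len(ranked) * frac))]

def adapt_mutations(best_now, best_prev, mutations):
    """On stagnation (no improvement over the previous generation),
    allow one more mutation; on progress, keep the current rate."""
    return mutations + 1 if best_now <= best_prev else mutations
```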
At the end of each generation, a little maintenance is done: updating the cloud application, in the form of a form, with the latest models (in the bonus section we will discuss a bug in WolframEngine that I managed to discover while solving this problem), and notifying the testing panel to update the model database and continue testing. The new minimum requirement for the next generation is recalculated based on the results of this one. In the last step, the current command of the control mechanism is downloaded from the cloud, and the cycle continues.
1. accumulates historical data for selected symbols
* data are stored in files, updated on a weekly basis (i.e. via a cron job)
* sources: capital.com [minute intervals, offer, spread, swap]
2. initialization of pipeline
2.1 local and remote kernel configuration
2.2 launching kernels
2.3 loading packages and configuration from cloud
2.4 fetching data
2.5 variables initialization
3. creation and training of agents
3.1 population initialization
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.1.1 initial population generated as simple progenitors of size N
![progenitors][7]
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.1.2 loaded previously trained models
populating new generation from loaded models
3.2 training
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.1 initialization of environment
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.2 initialization of policy net
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.3 data samplings (validation, training)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Full data: a TimeSeries from circa 2019 to today with a resolution of 1 minute
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Samples: given a sample length **T**, take consecutive samples of length T, starting from an offset. Half of the samples, chosen at random, are used in training; the rest are kept for validation
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.4 Training - initial stage (240 minute scale)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.5 Training - higher resolution
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.6 second-stage trained models which surpassed the minimal backward-validation requirement continue to generation selection
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.7 select 10% of the best by performance measurement
![training][8]
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.8 stagnation verification (if the performance of the population stagnates, increment the counter of possible mutations for the next generation)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.9 saving selection to cloud
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.10 crossbreeding
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.2.11 reevaluation of minimal requirement
3.3 training sample
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.3.1 penalties
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.3.2 treats
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.3.3 reward measurement
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.3.4 training & validation data randomization
4. running an agent
&nbsp;&nbsp;&nbsp;4.1 infrastructure initializations (broker API connection)
&nbsp;&nbsp;&nbsp;4.2 portfolio state data fetch
&nbsp;&nbsp;&nbsp;4.3 environment data fetch
&nbsp;&nbsp;&nbsp;4.4 models voting over data
&nbsp;&nbsp;&nbsp;4.5 action performance
&nbsp;&nbsp;&nbsp;4.6 validation data aggregation preservation and presentation
5. Instance of a council
* A set of trained agents (NNs) together with decision rule forms a *council*
![generation overviews][9]
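The sampling scheme of step 3.2.3 above can be sketched as follows (the function name and the non-overlapping windowing are our assumptions):

```python
import random

def make_samples(series, T, offset=0):
    """Given sample length T, take consecutive samples of length T
    starting from the offset; half of them, chosen at random, go to
    training, the rest are kept for validation."""
    windows = [series[i:i + T] for i in range(offset, len(series) - T + 1, T)]
    random.shuffle(windows)
    half = len(windows) // 2
    return windows[:half], windows[half:]   # (training, validation)

train, valid = make_samples(list(range(100)), T=10)
```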
Forward testing pipeline
------------------------
In the event that the average of the selection's validation is greater than the initial resources, the given selection is saved in the cloud, and the relevant script performing preliminary testing is informed to update the tested population. In the next part, we will see how this process takes place. The tested population is exposed, in real time, to live data from the broker through the API, namely the monitored model symbols and the monitored state of the portfolio variables. Each member of the population then expresses its opinion, and the final decision is determined in the form of an election. The decision is executed on the broker's side through the API by opening a long or short position on a demo account reserved for testing.
1. initialization
2. loading of models
3. fetching live broker data
4. model execution and voting
5. execution result on broker side
Upkeep pipeline
---------------
As the market and prices are constantly evolving, it was essential to ensure that models keep learning on current data. We developed a script whose sole purpose is to update the historical data of the monitored symbols at regular (weekly) intervals, distribute it over the network to all machines participating in parallel learning and, finally, notify the training script to update the training data before training the next generation.
1. initialization
2. fetching old data
3. via API fetching new data from broker
4. updating data and upload to cloud
5. notifying training script to fetch new data
NEAT
----
In this last part, we review the way in which the models are represented and organized: the method of pairing selected models and the subsequent variation of following generations using mutations.
1. creation
The models have two equivalent representations. To simplify the work, the neural network is represented as a Graph. For clarity, the vertices are colored: inputs are blue, outputs are red, and hidden neurons are green. The edge weights of the graph represent the weights of the individual synapses in the neural network.
![progenitors][10]
2. initialization
Initialization consists of loading the Graph object of the selected model, then analyzing it and translating it into a NetGraph object suitable for learning.
![net][11]
3. selection
After the end of the learning epoch, the models are ranked based on their success in the validation tests. We select the top 10% most successful models from this ranking.
![selection][12]
4. breeding
Pairing of selected models takes place in several logical phases. The first step is to create all possible pairs from the selected models. These pairs produce a given number of offspring. The offspring of individual pairs (siblings) are distinguished by various random mutations guaranteeing the siblings' diversity. We then randomly select a new population of the required size from the set of all descendants.
4.1 mutation
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.1.1 mutation of the first type
The mutation of the first type randomly chooses two vertices (one can be a not-yet-existing input) and adds a new synapse between them.
![mutate1][13]
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.1.2 mutation of the second type
The mutation of the second type chooses a random edge and intersects it with a new hidden vertex (neuron).
![mutate2][14]
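Both mutation operators can be sketched on a genome stored as an edge→weight dictionary over integer node ids; the representation is ours, the unit weight for the new incoming synapse follows the usual NEAT convention, and the variant that introduces a new input vertex is omitted for brevity:

```python
import random

def mutate_add_synapse(edges, nodes, rng=random):
    """Type 1: choose two vertices at random and add a new synapse."""
    src, dst = rng.sample(sorted(nodes), 2)
    edges.setdefault((src, dst), rng.uniform(-1, 1))
    return edges

def mutate_split_edge(edges, nodes, rng=random):
    """Type 2: choose a random edge and intersect it with a new hidden
    vertex, replacing (a, b) by (a, h) and (h, b)."""
    a, b = rng.choice(sorted(edges))
    w = edges.pop((a, b))
    h = max(nodes) + 1          # fresh hidden neuron id
    nodes.add(h)
    edges[(a, h)] = 1.0         # unit weight in, old weight out
    edges[(h, b)] = w
    return edges
```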
4.2 breeding
Two graphs are combined into a resulting offspring by copying edges, preferring the dominant specimen when a mutual edge is detected.
![breeding][15]
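The edge-copying crossover can be sketched in the same edge→weight dictionary representation (an illustration of ours, not the exact Hedgehog code):

```python
def crossover(dominant, recessive):
    """Combine two parent genomes (edge -> weight dicts) into an
    offspring by copying edges; where both parents share an edge,
    the dominant specimen's weight is preferred."""
    child = dict(recessive)
    child.update(dominant)      # dominant overwrites mutual edges
    return child

crossover({(0, 1): 1.0}, {(0, 1): 2.0, (1, 2): 3.0})
# -> {(0, 1): 1.0, (1, 2): 3.0}
```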
4.3 populating
Selected pairs breed a new generation of graphs.
![enter image description here][16]
5. creating the reinforcement loss
![loss][17]
6. defect detection and correction
7. save and load
7.1 saving
Trained models are saved into a cloud object.
7.2 loading
Trained models are loaded from the cloud.
Environment
-----------
In recent times, the term "gamification of stock markets" is often mentioned in the media and the professional community. I myself grew up in a generation where computer games were an integral part of childhood, so perceiving the stock market as a game engine in which the player tries to maximize his profit was not a problem for me.
We therefore chose an approach where the agent is trained in a reinforcement learning environment (we drew inspiration for the implementation from [reinforce learning][18]). However, our environment is not a pendulum simulator but a simple device driver implemented by us called "clerk", whose task is to store and evaluate the state of the simulated portfolio variables (P&L, margin, funds, etc.) and the lists of executed trades (longs, shorts). On request, it simulates the required training sample with the specified model over the required time with the specified initial funds; it also controls the system of handing out rewards and punishments.
For other needs, the implementation can also simulate a backtest on historical data for a selection of one or more models and evaluate their collective decision.
1. initialization
![environment ][19]
![sample][20]
2. execute testing sample
![enter image description here][21]
3. portfolio management
4. rewards and penalties
4.1 rewards
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.1.1 at the end of the business day, swaps are calculated for longs and shorts, then added to the overall funds
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.1.2 the allocated profit or loss in open longs and shorts is calculated and added to the rewards
4.2 penalties
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.2.1 initialization of environment
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.2.2 initialization of policy net
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.2.3 data samplings (validation, training)
5. discounted rewards
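Discounted rewards follow the standard recursion G_t = r_t + gamma * G_{t+1}; the sketch below, and in particular the discount factor value, is our illustration rather than Hedgehog's exact code:

```python
def discounted_rewards(rewards, gamma=0.99):
    """Compute the discounted return at every step,
    G_t = r_t + gamma * G_{t+1}, scanning from the last reward back."""
    out, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return out[::-1]

discounted_rewards([1.0, 0.0, 2.0], gamma=0.5)  # -> [1.5, 1.0, 2.0]
```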
Utilities
---------
Just like every project, ours also required a small number of commonly used utilities. We can divide them into two basic groups: those that take part in the learning process and those that take part in maintenance.
In the first category, we find functions that prepare and distribute data for learning and validation, calculate the length of training samples for the learning process, set the length of learning for the individual time scales (training rounds) and, last but not least, generate the training samples themselves using the "clerk" device for learning or backward testing.
In the second group, we find functions ensuring elementary activities such as the parallel loading of training data into all kernels, or updating the cloud application database after the successful training of a new generation.
1. fetching data
![enter image description here][22]
2. policy net training
3. validation and verification data split
![data split][23]
4. testing data
![testing data][24]
Conclusion
----------
The implementation of this solution, even if it was time-consuming, led to many interesting solutions which, without a doubt, enriched us professionally. We had the opportunity to try out a wonderful tool in the form of WM and implement the entire solution internally, starting with graph theory and neural networks, and ending with a web API, a web interface, etc.
If anyone is interested in this topic, we would be glad to start a productive cooperation.
enquiries@aizoo.tech
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-03-12at11.33.49%E2%80%AFPM.png&userId=2527793
[2]: https://doi.org/10.1162/106365602320169811
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WhatsAppImage2024-04-30at22.34.47%281%29.jpeg&userId=2527793
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-04-09at10.38.24%E2%80%AFPM.png&userId=2527793
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-20at2.14.48%E2%80%AFAM.png&userId=2527793
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-03-13at12.05.09%E2%80%AFAM.png&userId=2527793
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at9.57.27%E2%80%AFPM.png&userId=2527793
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.01.20%E2%80%AFPM.png&userId=2527793
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-04-04at12.55.29%E2%80%AFPM.png&userId=2527793
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.03.13%E2%80%AFPM.png&userId=2527793
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.03.20%E2%80%AFPM.png&userId=2527793
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.03.25%E2%80%AFPM.png&userId=2527793
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.03.32%E2%80%AFPM.png&userId=2527793
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.06.12%E2%80%AFPM.png&userId=2527793
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.06.17%E2%80%AFPM.png&userId=2527793
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.06.25%E2%80%AFPM.png&userId=2527793
[17]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.06.32%E2%80%AFPM.png&userId=2527793
[18]: https://www.wolfram.com/language/12/neural-network-framework/train-an-agent-in-a-reinforcement-learning-environment.html
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-19at9.56.02%E2%80%AFPM.png&userId=2527793
[20]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-19at9.56.08%E2%80%AFPM.png&userId=2527793
[21]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-19at9.56.20%E2%80%AFPM.png&userId=2527793
[22]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-19at9.59.43%E2%80%AFPM.png&userId=2527793
[23]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-19at9.59.48%E2%80%AFPM.png&userId=2527793
[24]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-19at9.59.51%E2%80%AFPM.png&userId=2527793matus plch2024-05-25T19:10:00ZPoisson disk sampling
https://community.wolfram.com/groups/-/m/t/3182613
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/c425db1e-2113-4c6a-8d19-6d2babf709b3Denis Ivanov2024-05-26T16:03:16ZOptimization challenge: row and column total preserving matrix randomization
https://community.wolfram.com/groups/-/m/t/3183543
Hello! As part of my efforts to develop a public toolbox for doing ecology with the WL, I am looking to optimize an often-used matrix randomization algorithm. Simply put, the end goal is an effectively random binary matrix that preserves the row and column totals of a starting binary matrix. The standard algorithm uses 'quad flips'. A 'quad' is a set of four locations in a rectangular arrangement (i.e., the intersections of two rows and two columns). If the elements of the quad are either {{1,0},{0,1}} or {{0,1},{1,0}}, then the elements can be flipped while preserving the row and column totals. Randomizing a matrix thus consists of searching for such quads, and flipping them, many times. Figuring out the number of flips required to asymptotically decouple the final matrix from the starting matrix is complicated, but it rises with the size and density of the matrix and can be in the thousands or tens of thousands for some kinds of matrices in ecology. And, as these randomized matrices are typically used to generate a null distribution of some statistic of interest (say, nestedness), the whole process itself may need to be repeated perhaps 1000 times. So, an efficient algorithm is key!
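For readers outside the Wolfram Language, the quad-flip operation can be sketched in a few lines. This is a hedged NumPy illustration of the *naive* pick-random-quads variant (not the pair-of-ones search discussed below); `quad_flip` is a hypothetical helper name, not part of any library:

```python
import numpy as np

def quad_flip(m, rng):
    """Attempt one row/column-total-preserving 'quad flip' on a 0/1 matrix.

    Pick two random rows and two random columns; if the 2x2 submatrix at
    their intersections is {{1,0},{0,1}} or {{0,1},{1,0}}, swap it for the
    other pattern. Returns True if a flip was made, False otherwise.
    """
    n_rows, n_cols = m.shape
    r = rng.choice(n_rows, size=2, replace=False)
    c = rng.choice(n_cols, size=2, replace=False)
    sub = m[np.ix_(r, c)]
    # flippable iff the diagonal agrees, the off-diagonal agrees, and they differ
    if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
        m[np.ix_(r, c)] = 1 - sub   # flip the quad in place
        return True
    return False

rng = np.random.default_rng(0)
m = (rng.random((20, 20)) < 0.3).astype(int)      # random binary test matrix
row_tot, col_tot = m.sum(axis=1).copy(), m.sum(axis=0).copy()
flips = sum(quad_flip(m, rng) for _ in range(5000))
# marginals are invariant under quad flips
assert (m.sum(axis=1) == row_tot).all() and (m.sum(axis=0) == col_tot).all()
```

The assertion at the end checks the defining property: however many flips succeed, the row and column totals never change.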
I attach below my first attempt, which does a single flip. For each flip it randomly picks a '1', then picks the other '1' entries in a random sequence and checks whether each forms a flippable quad (the other two entries in the quad are 0). If they do, it makes the flip and returns the new matrix, otherwise it keeps looking until every other 1 has been tested.
matrixSwap[m_] :=
 Module[{total, mCoords, swapCandidate, quadCandidates, n, quadCandidate, ok},
  total = Total[m, 2];
  mCoords = Position[m, 1];
  swapCandidate = RandomSample[mCoords, 1][[1]];
  quadCandidates = RandomSample[DeleteCases[mCoords, swapCandidate]];
  n = 1;
  Until[(quadCandidate = {{m[[swapCandidate[[1]], swapCandidate[[2]]]],
        m[[swapCandidate[[1]], quadCandidates[[n, 2]]]]},
       {m[[quadCandidates[[n, 1]], swapCandidate[[2]]]],
        m[[quadCandidates[[n, 1]], quadCandidates[[n, 2]]]]}};
     (ok = swapCandidate[[1]] =!= quadCandidates[[n, 1]] &&
         swapCandidate[[2]] =!= quadCandidates[[n, 2]] &&
         Total[quadCandidate] === {1, 1} &&
         Total[quadCandidate, {2}] === {1, 1}) || n == (total - 1)), n++];
  If[ok,
   ReplacePart[m,
    MapThread[#1 -> #2 &,
     {{{swapCandidate[[1]], swapCandidate[[2]]},
       {swapCandidate[[1]], quadCandidates[[n, 2]]},
       {quadCandidates[[n, 1]], swapCandidate[[2]]},
       {quadCandidates[[n, 1]], quadCandidates[[n, 2]]}},
      Flatten[Reverse[quadCandidate]]}]],
   m (* search exhausted without a flippable quad: return m unchanged *)]
 ]
Here is a test, using `Nest` to apply it many times.
testMatrix =
RandomBinaryMatrix[100, 100, 5000];
Timing[randomizedMatrix = Nest[matrixSwap, testMatrix, 1000];]
Image[testMatrix, ImageSize -> 150]
Image[randomizedMatrix, ImageSize -> 150]
{Total[testMatrix] === Total[randomizedMatrix],
Total[testMatrix, {2}] === Total[randomizedMatrix, {2}]}
For me it takes about 0.8 seconds to apply 1000* swaps. Some notes on this particular algorithm:
1. My code implements a suggestion I got from Daniel Lichtblau when I asked a related question on StackExchange over 10 years ago: basically, do not generate random quads but start with pairs of ones, and search iteratively (here using `Until`) for a flippable quad rather than generating all possible pairs. Good advice which I had forgotten about, but this time came up with more or less on my own. Still, there is surely room for improvement.
2. If more than half the entries in the matrix are 1, then it is more efficient to switch to testing pairs of zero entries. I will make this simple improvement depending on what the final version looks like.
3. *The function only picks one starting '1' entry. It is possible, especially for a small matrix, that there are no flippable quads for a given starting point. That means that in *n* iterations using `Nest`, there may be fewer than *n* actual swaps. This could be solved within the function by trying again with a different starting entry if no flippable quad is found. But the problem is likely not a big deal, because small matrices are fast to randomize, so one could just make *n* quite large in the `Nest`. And large matrices are slower but almost always have quads.
4. As this is a single-flip function, there is some overhead in passing the matrix and doing some of the initial processing (e.g., the `Position` command), which might be avoided by writing a function that does *n* flips internally.
Anyway, if anyone can see some ways to significantly speed this up, I would be most interested to hear about it. A challenge perhaps? Meanwhile I will continue to see what I can do. Perhaps there is a compiled version...
GarethGareth Russell2024-05-28T15:14:13ZPlot doesn't show function and NIntegrate will not try?
https://community.wolfram.com/groups/-/m/t/3183473
Hi,
I have a function that I am trying to integrate, using Wolfram Alpha Notebook Edition, but using Wolfram language.
See the Notebook:
Why does Evaluate not show the actual value as a real number?
Why does Plot not show the function?
Why does NIntegrate not try to integrate it?
Cheers,
Kari
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/a978313c-aad3-4417-bf4e-4f809fcd0044Kari Karhi2024-05-28T13:24:58ZA generalized function with parameters for both the upper and lower lines of the integrand to use
https://community.wolfram.com/groups/-/m/t/3182345
This is my first posting. My problem is to find an extreme value of an integral function: ![enter image description here][1]
And of course we want to make sure that this generalized function is convergent. Within my ability, an example is given in the attachment. If h itself is small, and the upper and lower limits of the integral are also only weakly affected by h, can we simplify the problem?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1716640020849.png&userId=3182309
Postscript: The problem can be better solved by establishing symbolic variables and then using an optimization algorithm. However, the answering method is more concise and efficient for some special solutions, and neither of the two ideas has a clear advantage over the other.Sen Ma2024-05-25T12:44:55ZImproving accuracy of neural network for determining qubit rotation angle
https://community.wolfram.com/groups/-/m/t/3175788
The physics example problem (to illustrate the use of a basic neural network in Mathematica) I am looking at is a qubit rotated about the y-axis, where the rotation angle is discretized as $\theta_j \in (0, \pi)$. The setup involves the y-rotated qubit measured in the z-basis (hence spin-up and spin-down projector measurements). This scenario involves first analytically determining the measurement outcome probabilities as a function of the rotation angle $\theta$, then generating measurement outcomes for training at specific fixed rotation angles $\theta_j$. Then, generating another set of test measurement data for some fixed rotation angle $\theta$, we use the neural network to infer the most probable rotation angle. My training data involves generating m = 1000 total measurements for each discrete rotation angle $\theta_j \in [0, \pi]$, then saving the measurement outcomes as tuples of spin-up and spin-down counts for each discrete angle. These outcomes are associated with the discrete $\theta_j$ values via one-hot vectors (hence a training pair of the form {1000,0} -> {1,0,0,0,0...} if for the first rotation angle we get all spin-up outcomes).
The idea is that after training, setting some true rotation angle $\theta$, and generating a new set of test measurement outcomes, the trained neural network should be able to output a probability distribution that shows the most likely discrete rotation angle is the true angle. The code below works, but I am having difficulty improving the accuracy without simply increasing the layers and MaxTrainingRounds (this seems to have its limits in improving accuracy). Can anyone advise on how to improve the accuracy of the code in determining the correct discrete rotation angle (I would like to maintain the general framework of the code)? I am very new to using Mathematica for machine learning applications, hence the query. Thanks for any assistance; this is the code in question:
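The data-generation step described above can be sketched as follows. This is a minimal NumPy illustration, assuming the standard result that an $R_y(\theta)$ rotation of $|0\rangle$ gives spin-up probability $\cos^2(\theta/2)$; all variable names here are hypothetical, not taken from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000                               # measurements per angle
angles = np.linspace(0, np.pi, 20)     # discretized rotation angles theta_j

training = []
for j, theta in enumerate(angles):
    p_up = np.cos(theta / 2) ** 2      # P(spin-up) after Ry(theta) on |0>
    n_up = rng.binomial(m, p_up)       # simulated measurement record
    features = [n_up, m - n_up]        # (spin-up count, spin-down count)
    label = np.eye(len(angles))[j]     # one-hot vector marking angle j
    training.append((features, label))

# theta = 0 gives p_up = 1, so the first pair is exactly ([1000, 0], {1,0,0,...})
```

Each `(features, label)` pair mirrors the `{1000,0} -> {1,0,0,0,0...}` form described in the post; a test set is generated the same way for one fixed angle.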
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/8bfdb354-d18c-41e8-8f39-242f20f1c52aByron Alexander2024-05-14T08:14:21ZSolving equation approximately
https://community.wolfram.com/groups/-/m/t/3182263
Look at this equation. Clearly it is not solvable, but a=c=0 reduces the equation, which gives two solutions. Now I want to find an approximate solution of the original equation that reduces to the solution of the simplified equation after substituting a=c=0 into the solution. Is there any way to do that?
Thanks in advance.
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/2ac28004-9ba3-4a0e-9d54-7ae523969098Debojyoti Mondal2024-05-26T08:54:02ZHow can I solve a system of ODEs using the RKF45 method with a shooting technique?
https://community.wolfram.com/groups/-/m/t/3182489
I tried to solve a system of ordinary differential equations in Mathematica and got results with the "ExplicitRungeKutta" method. But I want results using the RKF45 method with a shooting technique. How can I solve the system this way?
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/b6b892fc-d421-4d2c-9519-cc63ef738d9bJ Prakash2024-05-26T13:08:46ZHow to solve the equation algebraically?
https://community.wolfram.com/groups/-/m/t/2854383
Hello, everyone! Can you please help me to solve the equation algebraically:
(26 + 18 x)/(2 Sqrt[91 + 26 x + 9 x^2]) + (33 + 64 x)/(
2 Sqrt[70 + 33 x + 32 x^2]) + (39 + 74 x)/(
2 Sqrt[49 + 39 x + 37 x^2]) + (40 + 132 x)/(
2 Sqrt[8 + 40 x + 66 x^2]) + (91 + 190 x)/(
2 Sqrt[90 + 91 x + 95 x^2]) == 0.
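One structural observation may help here: each numerator is exactly the derivative of the quadratic under the matching radical, so the equation states that

$$F'(x)=0,\qquad F(x)=\sum_{k=1}^{5}\sqrt{c_k+b_k x+a_k x^2},$$

i.e. it locates a critical point of a sum of five square roots. Clearing the radicals leads to a polynomial of very high degree, so a closed-form algebraic solution should not generally be expected; numeric root-finding (e.g. `NSolve` or `FindRoot`) is the practical route.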
I have no idea... Wolfram Mathematica cannot solve it either.Aleksandr Miller2023-03-19T03:01:14ZParade of planets: celestial maps for when & where to see rare planetary alignment early June 2024
https://community.wolfram.com/groups/-/m/t/3181875
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=parade-ezgif.com-optimize.gif&userId=20103
[2]: https://www.wolframcloud.com/obj/1eed9d17-b812-4c48-acbc-3ced45ef1d70Jeffrey Bryant2024-05-24T21:50:07ZPlot the solution of a differential equation
https://community.wolfram.com/groups/-/m/t/3180700
Consider:
$$\frac{1}{(1-e)^2 w}=v-z$$
where $v=f(e,i,L,W,a,n)$, $z=g(e,i,L,W,a,w,n)$, and $L = h(e,w,a,b)$ are given functions.
From the above equation, $e$ can be derived as a function of $w$ along with the other parameters, i.e., $e=e(w;\dots)$. Using the latter, we can consider the following differential equation:
$$\frac{\partial e}{\partial w}=\frac{e}{w}$$
I would like to find $e$ and $w$ that satisfy the above differential equation. Obviously, both $e$ and $w$ will come out as functions of $a$ along with the other parameters.
Finally, I would like to create four separate plots for $e$, $w$, $\frac{e}{w}$, and $\frac{ae}{w}$ each of them against $a$ with the parameter values of $n=1$, $W=60$, $i=0.1$, and $b=0.6$.
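As a sanity check on the setup, note that the differential equation is separable:

$$\frac{\partial e}{\partial w}=\frac{e}{w}\;\Longrightarrow\;\int\frac{de}{e}=\int\frac{dw}{w}\;\Longrightarrow\;e=C\,w,$$

so its solutions are rays through the origin, and the condition singles out the point where the implicit curve $e(w)$ is tangent to such a ray; the ratio $e/w$ is then fixed by that point of tangency.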
By closely referring to `Michael E2`'s answer to [this post][1], I tried to come up with the following code:
Clear["Global`*"];
n = 1; W = 60;
L = (w/(b (a e)^b))^(1/(b - 1));
v = -(((-1 + e - i) (-i + (-1 + e + e i) L n) (1/((-1 + e) (-1 + e - i) w) + w/(-1 + e - i) + ((-1 + e) (1 - e L n) (-(1 + i)^2 (-1 + (1 + i)^((1 - e L n)/(L n - e L n)))^2 + a^2 i^2 (-1 + (1 + i)^(1 + 1/(L n - e L n)))^2 W^2))/(a i (1 + i) (1 - e + i) (1 - (1 + i)^((1 - e L n)/(L n - e L n))) (1 - (1 + i)^(1 + 1/(L n - e L n))) (i - (-1 + e + e i) L n) W)))/(i (1 + i + L n + e (-1 + (-2 + e - i) L n))));
z = -((a i (1 + i)^((1 + e)/(1 - e)) (-1 + (1 + i)^((1 - e L n)/(L n - e L n))) (-1 + (1 + i)^(1 + 1/(L n - e L n))) L n W + a (-1 + e) i (1 + i)^((1 + e)/(1 - e)) (-1 + (1 + i)^((1 - e L n)/(L n - e L n))) (-1 + (1 + i)^(1 + 1/(L n - e L n))) L n w^2 W + (-1 + e - i) (1 + i)^(-((2 e)/(-1 + e))) (-1 + e L n) w (1 + 2 i + i^2 - a^2 i^2 W^2 - 2 (1 + i)^(1 + 1/(L n - e L n)) ((1 + i)^(2 + 1/(-1 + e)) - a^2 i^2 W^2) + (1 + i)^(2 + 2/(L n - e L n)) ((1 + i)^((2 e)/(-1 + e)) -a^2 i^2 W^2)))/(a i^2 (1 + i) ((1 + i)^(e/(1 - e)) - (1 + i)^(1/(L n - e L n))) ((1 + i)^(e/(1 - e)) - (1 + i)^((1 + L n)/(L n - e L n))) (1 + i + L n + e (-1 + (-2 + e - i) L n)) w W));
myEQ = 1/((1 - e)^2 w) == v - z;
Block[{n = 1, W = 60, i = 1/10, b = 6/10, a = 1/2}, {#, Dt[#, w]} &@myEQ] /. {e -> e[w]} /. {e'[w] -> e[w]/w} /. {e[w] -> e} // Simplify;
icEQ = % /. Equal -> Subtract // Simplify;
icSOL = NSolve[icEQ == 0 && 0 < e < 1 && 0 < w < 2, {e, w}]
NDSolveValue[{ode = Solve[Block[{n = 1, W = 60, i = 1/10, b = 6/10}, D[{myEQ /. Equal -> Subtract, {e, w} . D[myEQ /. Equal -> Subtract, {{e, w}}]} /. {w -> w[a], e -> e[a]}, a] == 0], {e'[a], w'[a]}] /. Rule -> Equal, e[1/2] == (e /. First@icSOL), w[1/2] == (w /. First@icSOL)}, {e, w}, {a, $MachineEpsilon, 1}]
ListLinePlot[{e\[FivePointedStar], w\[FivePointedStar]}, PlotLegends -> Block[{e\[FivePointedStar], w\[FivePointedStar]}, HoldForm /@ {e\[FivePointedStar], w\[FivePointedStar]}], PlotRange -> All]
The code runs forever. Also, I failed to come up with code that creates the four separate plots.
[1]: https://mathematica.stackexchange.com/questions/287287/solving-implicit-function-numerically-and-plotting-the-solution-against-a-parameIan P2024-05-23T13:37:08Z[WSG24] Daily Study Group: Getting Started with Mathematica and WL
https://community.wolfram.com/groups/-/m/t/3182021
A Wolfram U daily study group on "Getting Started with Mathematica and the Wolfram Language" begins on July 29, 2024. The study group will run for five days through August 2nd, and each day will run from 11AM to noon CDT.
Join me and a group of fellow learners in a well-paced exploration of some of the fundamental ideas and useful concepts in Mathematica and the Wolfram Language. We'll talk about how you can explore Wolfram Language with ChatGPT, some of the fundamentals of Wolfram Language syntax and the notebook interface, important built-in functions, creating your own functions, visualization techniques, and more!
The idea behind this study group is to rapidly develop a strong foundation for a scientist, engineer, data analyst, or interested hobbyist. As such, no prior Wolfram Language experience or knowledge is necessary.
> [**REGISTER HERE.**][1]
I look forward to seeing you there!
![enter image description here][2]
[1]: https://www.bigmarker.com/series/daily-study-group-getting-started-wl-wsg54/series_details?utm_bmcr_source=Community
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WolframUBanner.jpeg&userId=1711324Arben Kalziqi2024-05-24T23:36:42ZSymbolic analysis of computable functions?
https://community.wolfram.com/groups/-/m/t/3181731
I'm not entirely sure of the appropriate terminology here, so apologies if it's a bit vague.
Does Mathematica support symbolic analysis of computable functions? Can it determine whether any given computable function halts? (OK, just kidding.) For example, can it determine the statistical distribution of outcomes given an assumed distribution of inputs? The theory behind that one is in this paper: http://www.cs.sun.ac.za/~jaco/PAPERS/gdv12.pdf
I guess I'm wondering whether Mathematica can be used as an adjunct to the static analysis tools that exist for various programming languages. If it could perform the probabilistic symbolic analysis described in the aforementioned paper, I would buy a copy straight away, so I'll turn this into a feature request if the answer turns out to be no.
Thanks, cheers.
JeremyJeremy Murphy2024-05-24T22:00:39ZHaving trouble using Mathematica Online on an iPad.
https://community.wolfram.com/groups/-/m/t/782954
I find that Mathematica Online running in a browser window does not accept input from the iPad Pro virtual keyboard, the case cover keyboard, or a Bluetooth keyboard. I have installed the Wolfram Cloud app, and it sort of works with the virtual keyboard but not consistently with anything else. I am typing this note on my iPad using my case cover keyboard; I am using Safari at the moment, but I have found the same results with Chrome. Has anyone else seen this problem? Do you have a solution? Thanks.
Rees EvansCelia Evans2016-01-30T02:40:26ZCustom defaults for *some* options, passing through others
https://community.wolfram.com/groups/-/m/t/3181457
Good afternoon! Suppose I want to write a function that calls another, specifying certain options in that call by default, but I also want the user to be able to override those options and to specify any other options available for the called function. I illustrate with a toy example function that does a `ListPlot`, but with the option `Frame -> True`. The following *partly* works, in that it sets the Frame option to one that is different from the `ListPlot` default, and yet can be overridden:
Clear[mySpecialPlot]
Options[mySpecialPlot] = {Frame -> True};
mySpecialPlot[data_, opts : OptionsPattern[]] := Module[{},
ListPlot[data, Frame -> OptionValue[Frame]]
]
mySpecialPlot[{1, 2, 3, 4, 5}]
mySpecialPlot[{1, 2, 3, 4, 5}, Frame -> False]
However, what it does *not* do is allow the user of `mySpecialPlot` to pass through other `ListPlot` options. So instead, I tried this:
Clear[mySpecialPlot]
Options[mySpecialPlot] = {Frame -> True};
mySpecialPlot[data_, opts : OptionsPattern[]] := Module[{},
ListPlot[data, FilterRules[{opts}, Options[ListPlot]]]
]
mySpecialPlot[{1, 2, 3, 4, 5}, Background -> GrayLevel[0.95]]
mySpecialPlot[{1, 2, 3, 4, 5}, Frame -> True,
Background -> GrayLevel[0.95]]
This *does* do the pass-through, but it does not respect the `Options[]` command setting the new default; I have to specify it explicitly.
So each code does one of the things I want: how can I combine them?Gareth Russell2024-05-23T18:22:56ZUsing ValidationSet in NetTrain
https://community.wolfram.com/groups/-/m/t/3181618
Hi, I am quite new to using Mathematica for machine learning purposes, hence this might be a basic question. In the following code I am using a trained neural network to output the rotation angle of a qubit (after training for a set of discrete rotations). After training I send the training data through as a test, and the output is as expected. The neural network runs, but I am interested in how to use the built-in ValidationSet option to avoid overfitting. My NetTrain function is of the following form:
trainedNet = NetTrain[net2, trainingData2, MaxTrainingRounds -> 70000]
Can anyone advise on a basic way to employ the built-in ValidationSet option for my example code? I left my attempt highlighted in purple (commented out):
trainedNet = NetTrain[net, trainingData2, MaxTrainingRounds -> 70000, ValidationSet -> Scaled[.1]]
but I'm not sure if it is being employed in the correct way. Any advice on how to effectively use ValidationSet is most appreciated. Thanks for your time.
The code in question is as follows:
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/6dffc1a6-a7e1-4e9c-aaf2-e182ea7010d1Byron Alexander2024-05-24T06:38:11ZPlot range prevents placement of graphic elements
https://community.wolfram.com/groups/-/m/t/2815498
The plot range prevents graphics elements from being placed outside the plot range. Example: Sine should only be plotted between -Pi and +Pi. But I want to place an arrow from 2Pi to 3Pi.Bernd Wichmann2023-01-26T14:26:25ZComputational exploration for the ages of programming language creators dataset
https://community.wolfram.com/groups/-/m/t/3180327
![Pareto principle plot of for the number of created (or renamed) programming languages per creator][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=7812Lead2.png&userId=20103
[2]: https://www.wolframcloud.com/obj/b2bc4117-b810-49d4-a58f-c7b53a83f18dAnton Antonov2024-05-21T23:26:08ZWolfram Demonstrations Driven by AI
https://community.wolfram.com/groups/-/m/t/3181386
Hello everyone. My question is: is there a way to teach an AI to manipulate Wolfram Demonstrations?
For example, I have an OpenAI account with an API, and I want to teach it to manipulate the magic square Demonstration so as to switch just one parameter on the actual Demonstration. Maybe I need to train the AI on some data set to do so, I don't know! Maybe there is some easy way to do it?
https://community.wolfram.com/groups/-/m/t/3181085
I have a problem which I can't solve properly in Mathematica:
A beam tilts at an edge; initially, it adheres, then it slides. First, the transition from adhering to sliding and then from sliding to the lifting of the edge should be determined. This can be accomplished using the center of mass theorem, angular momentum theorem, and the condition of adhesion. For setting up the differential equation, please refer to the attached document.
The first task, finding the time and angle at which the beam starts to slide, is correct in my opinion, but I do not get plausible values for the second task; see the attached notebook. It should be around 0.68 seconds for mu = 0.3. Help would be highly appreciated, as I slowly start to despair. &[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/b0399329-3046-4828-b032-6a1ce33ce85bFelix Toperczer2024-05-23T15:11:28Z[R&DL] Wolfram R&D LIVE: FeynCalc 10 for algebraic calculations in Quantum Field Theory
https://community.wolfram.com/groups/-/m/t/3181309
*MODERATOR NOTE: This is the notebook used in the livestream "New features in FeynCalc 10" on Wednesday, April 24 -- a part of Wolfram R&D livestream series announced and scheduled here: https://wolfr.am/RDlive. Subscribe to [**@WolframRD**] (https://wolfr.am/1eatWLcDA) on YouTube for more livestreams, exclusive VODs, creator presentations, behind-the-scenes insider videos, and so much more.*
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/c0933631-c195-44d4-8210-9c5ab2975dd6Vladyslav Shtabovenko2024-05-23T14:48:41ZLateral surface of body of rotation
https://community.wolfram.com/groups/-/m/t/3180371
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2024-05-2220_09_36-VolumenRotationsk%C3%B6rper.nb_-WolframMathematica.jpg&userId=1582830
hello,
What is wrong here? I can't get a numerical value.
jmac2050Jürgen Maczioch2024-05-22T18:14:02ZNumerical Simulation of Damped, Driven Nonlinear Waves with Extended Initial Conditions
https://community.wolfram.com/groups/-/m/t/3180620
I am working on a project that requires creating a specific type of graph, but I am having trouble writing the correct code. The graph should look similar to the one I have attached below. There is also an open question on StackExchange [Numerical Simulation of a Damped, Driven Nonlinear Wave System with Spatially Extended Initial Conditions][1] . Could someone please provide guidance on how to correctly create this type of graph? Any help with the code or tips would be greatly appreciated!
Thank you in advance for your assistance!
&[Wolfram Notebook][2]
![enter image description here][3]
[1]: https://mathematica.stackexchange.com/questions/303365/numerical-simulation-of-a-damped-driven-nonlinear-wave-system-with-spatially-ex
[2]: https://www.wolframcloud.com/obj/6135d419-a7ec-42dd-a1c2-25f1c7855fa0
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=530Upl0H.png&userId=2600850Athanasios Paraskevopoulos2024-05-23T00:24:54ZCan anyone verify these factors of the largest RSA number on Wikipedia?
https://community.wolfram.com/groups/-/m/t/3180824
Are these numbers, which took less than 2 seconds to find, close to the factors?
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/ef5ae021-fac6-4b21-9fa1-f17736c70407Bobby Joe Snyder2024-05-22T20:47:00ZA reformulation of the Browaeys and Chevrot decomposition of elastic maps
https://community.wolfram.com/groups/-/m/t/3180725
![The monoclinic map T of Eq. (57) together with two XISO-approximations B and K to T. The elastic map B is the XISO-approximation whose regular axis coincides with the 2-fold axis k of T.][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6302Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/97822251-eebd-4d47-a0bf-c27ffff1f54eCarl Tape2024-05-22T19:22:05ZVariational quantum linear solver
https://community.wolfram.com/groups/-/m/t/3180154
![circuit implementation and symbolic & numeric quantum circuit probabilities using the Variational Quantum Linear Solver algorithm proposed by Bravo-Prieto et al][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1867hero.png&userId=20103
[2]: https://www.wolframcloud.com/obj/ec3c69a5-954f-4bef-98d4-140cc75b6ad4Sebastian Rodriguez2024-05-21T22:09:28ZModeling the maximum potential of rotational grazing with Netlogo
https://community.wolfram.com/groups/-/m/t/1908383
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/dc4d4250-2603-48db-ad3c-ae1f765f95bfJessica Shi2020-03-26T00:47:56ZInserting a matrix in Mathematica Online
https://community.wolfram.com/groups/-/m/t/3177108
How do I insert a matrix in Mathematica Online? The documentation says it should be under the Insert menu, but it is not there in my Online session.
Also, is there a good place that describes the Online interface for someone who is used to the desktop interface?
Thanks.John Gore2024-05-15T23:07:25ZMeasurement based quantum computing: equatorial measurement rules, equivalent circuits and patterns
https://community.wolfram.com/groups/-/m/t/3180183
![generated qubits circuit from a graph using the function GraphToCircuit][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2299Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/64e26b3d-28f1-4f59-a8f8-c430de868c7aBruno Tenorio2024-05-22T04:48:11ZHow to calculate and plot the cross-correlation of two signals that have different sizes?
https://community.wolfram.com/groups/-/m/t/3180305
I have two video signals. One encoded in MPEG format and the other in H.263 format. These signals are represented in .txt files with the frame sizes in an array.
I need to calculate the cross-correlation without losing or adding data, even though they have different sizes.
The file with MPEG frames has 89000 frames and H.263 has 17000 frames.
Any suggestions on how to do this in Python or MATLAB?
What I managed to do was use numpy.correlate with the 'full' parameter, which calculates the cross-correlation for all possible offsets (lags) of the time series:
correlation = np.correlate(mpeg_format['FrameSize'], h_format['FrameSize'], mode='full')André Demori2024-05-21T22:12:53ZUnexpected Limit result
https://community.wolfram.com/groups/-/m/t/3180448
I entered this expression on Mathematica:
Limit[(1 + (i)/n)^(n^2), n -> Infinity]
I got the answer:
Infinity, i > 0
I think the answer is wrong.Moses Ike2024-05-22T03:41:24ZImport trained classifier to Python
https://community.wolfram.com/groups/-/m/t/3176919
Hello Wolfram Community
I have trained a text classifier that understands a product description and puts the product in a category
for example:
"Low Fat milk, 1000ml" -> Dairy
"Strawberry high protein yogurt" -> Dairy
"1kg of red apples" -> Fruits
and so on.
My classifier was built with the Classify function and I saved the model in a .wmlf file. Whenever I need to classify a bunch of products I just import it as
MarkovClassifier =
Import["C:\\Users\\Amor_Rodrigo\\Desktop\\Latest\\Markov Classifier \
834.wmlf"]
and then just classify my list
MarkovClassifier[mylist]
That's it. As easy as that. My problem is that now I need to implement this function in Python.
I am trying something like this
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wlexpr
session=WolframLanguageSession()
classifierpath = r"C:\\Users\\Amor_Rodrigo\\Desktop\\Latest\\Markov Classifier 834.wmlf"
session.evaluate(wlexpr('MarkovClassifier=Import[classifierpath]'))
test= session.evaluate(wlexpr('Map[#^2 &, Range[5]]'))
print(test)
session.terminate()
My output is this
First argument classifierpath is not a valid file, directory, or URL specification.
First argument classifierpath is not a valid file, directory, or URL specification.
(1, 4, 9, 16, 25)
Does anyone know how to import that classifier function and put it to work?
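For what it's worth, the error messages suggest that the literal symbol classifierpath reaches the kernel undefined, because wlexpr sends the string as-is without substituting Python variables. One possible fix (a sketch, untested against a live kernel) is to splice the Python path into the expression string:

```python
classifierpath = r"C:\Users\Amor_Rodrigo\Desktop\Latest\Markov Classifier 834.wmlf"

# Wolfram Language string literals use backslash escapes, so double them
# before embedding the Windows path in the expression text.
wl_path = classifierpath.replace("\\", "\\\\")
import_expr = f'MarkovClassifier = Import["{wl_path}"]'
print(import_expr)

# Then, inside the session as before:
# session.evaluate(wlexpr(import_expr))
# labels = session.evaluate(wlexpr('MarkovClassifier[mylist]'))
```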
Any help will be really appreciatedRodrigo Amor2024-05-16T04:58:44ZLooking for unbuffered Print[]
https://community.wolfram.com/groups/-/m/t/3179729
Hello Mma users,
I run Mathematica v12.3.1.0 under Windows 10 and I'd like to produce real-time text output.
I have a number of long-running applications and face the following problem: Print[x] output only appears at completion or abortion of the running cell. I need to find a way to make the output appear at the moment Print[x] is executed. I only need text output; other objects can wait for completion.
Another related question: some inputs generate a large amount of text output, each piece produced via Print[]. Sometimes the model diverges and starts to run astray. At that point, abort (Alt+.) doesn't work, and the only way I know to stop the evaluation is to kill the running kernel using Process Explorer or Process Hacker. I know this is rude, and I'd be happy to learn of another, cleaner way to stop a diverging evaluation. Note that if there is no massive Print[] involved, aborting works fine with Alt+.Jean-Christophe Deschamps2024-05-21T09:44:45ZUsing JacobiSymbol vs KroneckerSymbol
https://community.wolfram.com/groups/-/m/t/3179711
Hello all,
from my understanding, both JacobiSymbol[n,m] and KroneckerSymbol[n,m] return the same values when their arguments are integers, even though JacobiSymbol is about 6× faster (and even though, mathematically, the Jacobi symbol is only defined for odd positive m).
nn=1000;
a = Table[KroneckerSymbol[n, m], {n, -nn, nn}, {m, -nn, nn}]; // AbsoluteTiming
b = Table[JacobiSymbol[n, m], {n, -nn, nn}, {m, -nn, nn}]; // AbsoluteTiming
a==b (* True *)
So, is it safe to use JacobiSymbol instead of KroneckerSymbol?
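For what it's worth, the agreement on the domain where the Jacobi symbol is defined (odd positive m) can be cross-checked with a small pure-Python implementation of both symbols; this is a sketch of the textbook algorithms, not of Wolfram's internals:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2 from a
            a //= 2
            if n % 8 in (3, 5):    # (2/n) = -1 when n = +-3 mod 8
                result = -result
        a, n = n, a                # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def kronecker(a, n):
    """Kronecker symbol (a/n), extending the Jacobi symbol to all integers n."""
    if n == 0:
        return 1 if a in (1, -1) else 0
    result = 1
    if n < 0:
        n = -n
        if a < 0:
            result = -result
    while n % 2 == 0:              # peel off (a/2) factors from even n
        n //= 2
        if a % 2 == 0:
            return 0
        if a % 8 in (3, 5):
            result = -result
    return result * jacobi(a, n)

# On odd positive n, the two symbols coincide:
assert all(jacobi(a, n) == kronecker(a, n)
           for n in range(1, 60, 2) for a in range(-30, 31))
```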
Thank you in advance.Paolo Xausa2024-05-21T08:53:52ZQuantum research and education: unveiling Wolfram Mathematica's superiority
https://community.wolfram.com/groups/-/m/t/3178293
![Quantum qubit states and probabilities using Mathematica's quantum framework][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=quantummathematica.png&userId=20103
[2]: https://www.wolframcloud.com/obj/88ca7f05-28c8-4f26-9e76-1efe24135e1aMads Bahrami2024-05-18T00:10:49ZHyperbolic non-Abelian semimetal
https://community.wolfram.com/groups/-/m/t/3180067
![Real- and reciprocal-space structure of the hyperbolic non-Abelian semimetal. Schematic summary of techniques used in this work to access eigenstates of hyperbolic lattices. Hyperbolic non-Abelian semimetal with Dirac mass m=3. Flux insertion in PBC clusters][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10542Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/52cfaf88-b8f7-41aa-bcd1-6d1621f696a1Patrick M. Lenggenhager2024-05-21T19:31:27ZSupport structure tomography using per-pixel signed shadow casting in human manikin 3D printing
https://community.wolfram.com/groups/-/m/t/3180028
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=7228lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/a170ee16-bfc3-4d02-a9ba-fcde039c7402InHwan Sul2024-05-21T17:49:47ZWhy does nested Integration work for a single function but not for a list of functions?
https://community.wolfram.com/groups/-/m/t/3175908
I am trying to apply nested integration to lists of functions. The code works fine for a single function (not a list), but returns an error when I try to calculate the same expression with a list of functions.
A function `f[x,args...]` is first simplified by fixing the argument `x` (say `X={1,2,3,4}`), and I obtain a list of functions for the given `x`'s. Next, I define integral functions `f2[args...]` and `f3[args...]` and try to compute a nested integration with the list, hoping to obtain a list of results for all `x`'s. Can this be done without using `Indexed` or extracting parts of the function list with `f[[i]]`? I would like to pass a list into `NIntegrate` and obtain a list of outputs.
Currently my solution is the following, but I would like to avoid the use of indexes:
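For comparison, the list-in/list-out behavior being asked about can be sketched in plain Python with a naive midpoint rule (exact here because the integrands are linear); the integrands and bounds mirror the Wolfram example below:

```python
def midpoint(f, a, b, n=64):
    """Midpoint rule for the integral of f over [a, b]; exact for linear f."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def triple(f, K1, K2, K3):
    """Nested integration: x over [0,K1], then y over [0,K2], then z over [0,K3]."""
    return midpoint(lambda z:
           midpoint(lambda y:
           midpoint(lambda x: f(x, y, z), 0, K1), 0, K2), 0, K3)

# One integrand per list entry: k*x + y + z for k = 1..5
integrands = [lambda x, y, z, k=k: k * x + y + z for k in range(1, 6)]
results = [triple(f, 1, 1, 10) for f in integrands]
print(results)  # approximately [60.0, 65.0, 70.0, 75.0, 80.0]
```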
ListExpressions = {x + y + z, 2 x + y + z, 3 x + y + z, 4 x + y + z, 5 x + y + z};
f[x_?NumericQ, y_?NumericQ, z_?NumericQ, KK_?NumericQ] := Evaluate[Indexed[ListExpressions, KK]]
f2[K1_?NumericQ, y_?NumericQ, z_?NumericQ, KK_?NumericQ] := NIntegrate[f[x, y, z, KK], {x, 0, K1}];
f3[K1_?NumericQ, K2_?NumericQ, K3_?NumericQ, KK_?NumericQ] := NIntegrate[f2[K1, y, z, KK], {y, 0, K2}, {z, 0, K3}];
f3[1,1,10,#]&/@Range[5]
Out= {60.,65.,70.,75.,80.}Aka Kopejkin2024-05-14T11:30:40ZSolving nonlinear equation with integrals and beta functions
https://community.wolfram.com/groups/-/m/t/3179575
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=GINI.jpg&userId=3179559
Hello! I am new to Mathematica. Does anybody here know how to solve this equation for alpha? I find it very difficult. I have tried NSolve[] and Solve[], but they do not work.Muhammad Fahem Bin Musa2024-05-21T02:31:51ZConfiguring Python for ExternalEvaluate
https://community.wolfram.com/groups/-/m/t/3175724
I am following [this guide][1] to try to get Python working in `ExternalEvaluate` (on Windows 11, Mathematica 13.0). So far, I have successfully installed Python (Step 1), the Python package manager (Step 2), and the “pyzmq” package for Python (Step 3).
But I get a "Missing Dependencies" error on Step 4:
![enter image description here][2]
From [this post][3], I learned that this may be because the Python library path may not be in Mathematica's default Path. So I tried using `SetEnvironment`, to no avail:
![enter image description here][4]
I then use `RegisterExternalEvaluator`, but still get MissingDependencies:
![enter image description here][5]
And `FindExternalEvaluators["Python"]` still shows MissingDependencies. Not sure what else to try.
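As a diagnostic, Python itself can report where it lives, which may help when pointing Mathematica at the right interpreter (exactly which paths and environment variables Mathematica consults is an assumption on my part):

```python
import sys
import sysconfig

# Interpreter location: a candidate for the executable path given to
# RegisterExternalEvaluator.
print(sys.executable)

# Installation locations that may need to be on the search path.
print(sysconfig.get_paths()["stdlib"])
print(sysconfig.get_config_var("LIBDIR"))  # may be None on Windows
```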
[1]: https://reference.wolfram.com/language/workflow/ConfigurePythonForExternalEvaluate.html
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-13missingdependencies.png&userId=167076
[3]: https://community.wolfram.com/groups/-/m/t/1975953
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6197Screenshot2024-05-13setEnvironment.png&userId=167076
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-13register.png&userId=167076Bryan Lettner2024-05-13T23:50:08ZError from FindRoot: The function value is not a list of numbers
https://community.wolfram.com/groups/-/m/t/3138464
I'm trying to use FindRoot or Solve to find the intersection of two non-linear equations; any suggestions on how to remedy the errors?
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/dbae5370-049f-470d-b19b-a52348ad9eecJonathan Wooldridge2024-03-10T22:04:50ZDifferent result integrating with integer bounds vs floating point bounds
https://community.wolfram.com/groups/-/m/t/3178350
I found an interesting bug (I think) involving the difference between using integer bounds and floating point bounds when integrating in Mathematica:
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=mathematicabug.PNG&userId=3178314
Why is it that this integral evaluates to positive pi when using integer bounds but -pi when using floating point bounds? What am I missing?Baiza Mand2024-05-17T15:55:24Z$ProcessorCount isn't equal to the real number of CPUs
https://community.wolfram.com/groups/-/m/t/3178956
I'm running Mathematica 13.2 on a CentOS 7 machine with two AMD Epyc 9754 CPUs, but $ProcessorCount=12, which is far fewer.
The shell command "lscpu" and the C function sysconf(_SC_NPROCESSORS_ONLN) from <unistd.h> both report 256 CPUs, and I'm sure the program is running on the computing node.
I tried some workarounds, such as SetSystemOptions["ParallelOptions" -> "ParallelThreadNumber" -> 256], and $MaxLicenseProcesses gives Infinity, but they don't help.
This problem is complicated; maybe the key is how Mathematica determines the number of CPUs. I guess it may be associated with the MKL library, but I don't know how to test that.
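For comparison, the same counts that lscpu and sysconf report can be read from Python; if the affinity mask is smaller than the total count, a cgroup or batch-scheduler CPU limit (rather than Mathematica itself) could be the culprit — an assumption worth ruling out:

```python
import os

logical = os.cpu_count()  # what the OS reports, like lscpu / sysconf
# CPUs this particular process is actually allowed to run on (Linux only).
if hasattr(os, "sched_getaffinity"):
    allowed = len(os.sched_getaffinity(0))
else:
    allowed = logical
print(logical, allowed)
```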
Thanks for help and discussion.Shepard WQC2024-05-19T10:37:00ZDynamical Hall responses of disordered superconductors
https://community.wolfram.com/groups/-/m/t/3179263
![Proposed setup for the measurement of the Hall effect. Materials subjected to a magnetic field show circular birefringence, i.e. left and right polarized light waves propagate with different velocities. Below is Hall response of a superconductor for different temperatures.][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10740Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/83cf9702-e3b0-4c31-9e49-35a7a8af2d93Alberto Hijano2024-05-20T17:44:43Z