An SEIR like model that fits the coronavirus infection data

Posted 1 year ago
27447 Views
|
54 Replies
|
30 Total Likes
Posted 1 year ago
 Here is a picture with the fatalities per million of Italy, the UK, Sweden, and the US; and now also for Brazil and Mexico. For the US, for example, this translates into a forecast of about 146,000 total fatalities during the outbreak. We will update this on a daily basis but keep the June 1 forecast. For Brazil, it translates into over a million fatalities in the long run, and the fatality-per-million forecast will surpass that of every other country. This model will be hard to verify as there is not yet enough data (the data was restored last night). Only time might tell what happens in the end.

September 14, 20, 27: Pictures updated. We need to update the models for Brazil and Mexico. Next update in about two weeks.

September 5: Brazil's fatality rate is now higher than that of the US, Sweden, or Italy. IHME updated its fatality forecast for the US for the end of the year to something above 400 thousand as the most likely estimate.

August 28: Brazil's fatality rate has now caught up with that of the U.S. and is headed higher. Shortly it will surpass Italy and Sweden. IHME now forecasts around 317,000 fatalities in the US by December. Unless there is another spike in the number of cases, we doubt that the number of fatalities will be so high. They should start declining soon. We hope to make a new model for the US soon.

August 24: The IHME forecasts of fatalities for Sweden and Brazil (see June 14 below) by August 4th were way too high ... our forecasts are almost exact. In particular, our forecast of the Brazilian fatality rate dates back to early June and is our most exact model. The number of fatalities is nowhere near what IHME predicted. However, the estimate for the US in October will most likely turn out to be low, as there was an upsurge. We have not calculated a new model for the US, but there should be enough data to do that in the near future.

August 9: We see clearly a second wave in the making in Spain, and probably elsewhere in Europe. We will compute a new model for these waves when there is enough data. The weekly total in Spain this week is back to the levels of 25 April.

August 1: There is now growth in all five big EU countries, and quite a bit of it in Spain.

July 27, 28: We continue to observe growth in the big 5 euro countries, especially in Spain, France, and Germany. Italy and the UK seem under control. We have adjusted the model for the fatality rate for all countries except the US. Our forecasts for these countries have changed relatively little. There will be a substantial change for the US when more data is available. The date when Sweden's rate becomes larger than Italy's has been pushed back by three days, from the 12th of August to the 15th of August.

July 18: We observe new growth in the big 5 euro countries, and an uptick in the fatality rate in the US.

July 7: IHME forecasts 88,000 deaths in Mexico by October. Our forecast is slightly under 80,000.

July 3: Our forecast for the number of fatalities in the U.S. for the 4th of July is within the margin of error of our calculation (about 3%). It is slightly lower than the actual figure. We will keep the same forecast for the HGHI forecast for September. On Monday we will provide a more specific number, but we guess it should not be more than 160,000 (vs the 200,000 of HGHI). We note, however, that there is an uptick in the fatality rate, so it is hard to know exactly what will happen. Hopefully the situation in the U.S. stabilizes and improves soon.

July 1: We are adding the weekly totals for the five big euro countries since May 2. We exclude the current week. Weeks run from Sunday to Saturday.

June 26: There is a significant uptick in fatalities in the U.S. Our model might have to be recalculated if it persists.

June 23: This section will be updated in the mornings with the results of the previous day.

June 14: IHME (https://covid19.healthdata.org/brazil and https://covid19.healthdata.org/sweden) forecasts 165,590 fatalities for Brazil by August 4th, and 8,534 for Sweden. Our model forecasts 6,012 for Sweden and 100,484 for Brazil. It will be a good check for our modeling. A similar projection for the USA extending to October 1 forecasts 169,890 fatalities. If the country avoids a second wave, our model indicates that there will not be more than 150,000 fatalities. The IHME model for the USA can be found at covid19.healthdata.org/united-states-of-america. We will follow this closely, always hoping for the best.

June 9: The University of Massachusetts, the Guardian reports on June 9, forecasts 130,000 fatalities by the 4th of July in the US. Our model's forecast is 128,550, slightly lower. Our MARGIN of ERROR is +/- 3,000. If w

Attachments:
Posted 1 year ago
 Dear Vitaliy,

I realize it is about two months now and I have not updated my notebook. As soon as I get the time (that is, when I go on holiday, in about a couple of months), I will at least publish another notebook containing two things: a different kind of model that I have been discussing here (I call them "TRUE" models, and it is what epidemiologists usually use), which is very good for forecasting once there is enough data, and also a set of programs to find optimal fits of the models to data automatically. This notebook will also contain the utility program with which you extract the forecasts, to get the pictures I have at the bottom of the post and which I added just a few days ago. I will not publish a notebook with fits to all data sets that gives you all the pictures ... but at least one, so that people can then do their own with the data they might be interested in. I will also update the old notebook to contain both SEIR and SIR models of the kind I have been discussing since the beginning, with ONE data set. These, unfortunately, are harder to fit, and I do not yet have a fully, or almost fully, automated way of fitting the parameters; I have a set of "rules" which I apply to get the fits, and they work most of the time, but not always, and they are not automatic in the least. Apologies for the delays and the limitations. I am up to my ears with work. During lockdown, I have been busier than ever!

Regards, Enrique
Posted 1 year ago
 Hello,

I will post this soon for the SIR models.
Posted 1 year ago
 Dear Martin,

Attached to this reply is a quick-and-dirty notebook which shows how to make S grow ... the change is in the equation for S'. If you uncomment the commented factor, you can also bring it back down to zero quickly ... sorry I don't have time to make this better looking. Hope this is helpful.

Regards, Enrique

Attachments:
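As a rough illustration of the idea (this is not the attached notebook), here is a minimal Euler-integration sketch of an SIR model in which susceptibility can grow again after restrictions are lifted. The growth term `alpha * (p - s)`, the damping factor, and all parameter values are hypothetical placeholders, not fitted values from this thread.

```python
# Minimal sketch (not the attached notebook): an SIR model where the
# susceptible pool S grows again after a lifting date t_lift. The growth
# term alpha * (p - s) and all parameter values are illustrative only.

def simulate(days=300, p=1_000_000, s0=500_000, i0=100.0,
             beta=0.4, gamma=0.1, alpha=0.01, t_lift=120, dt=0.1):
    s, i, r = float(s0), float(i0), 0.0
    out = []
    n_steps = int(round(days / dt))
    for k in range(n_steps):
        t = k * dt
        # After t_lift, previously shielded people re-enter S.
        growth = alpha * (p - s) if t > t_lift else 0.0
        # growth *= max(0.0, 1 - t / days)  # uncomment to damp S back down
        ds = -beta * s * i / p + growth
        di = beta * s * i / p - gamma * i
        dr = gamma * i
        s += ds * dt
        i += di * dt
        r += dr * dt
        out.append(((k + 1) * dt, s, i, r))
    return out
```

Before `t_lift` these are the standard SIR equations, so S only decreases; after `t_lift` the extra term feeds people back into S, which is what produces a second wave in this kind of model.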
Posted 1 year ago
 Hi Enrique,

Could you possibly let me know how to apply and lift restrictions?

Many thanks, Martin
Posted 1 year ago
 In this section we use the optimally fitted "TRUE" models to forecast daily new-case trends. We have models for the USA, Italy, Finland, and Sweden, and later Denmark (not necessarily in this order), to show how lifting restrictions causes a deviation from the forecast. Sweden is an interesting case, as there are no policy changes foreseen in the future. The picture for Italy will be provided when it is ready. The US and Swedish models will not be updated. The Finnish model will be updated last on the 13th of May, when restrictions are lifted. The model for Denmark will be fitted to the 13th of April, when restrictions were lifted.

The yellow curve is the actual number of daily cases, the blue curve is the centered 14-day moving average of those numbers, and the green curve is the trend forecast by the "TRUE" models. Recall that in a "TRUE" model, the number of cumulative cases is matched to the R compartment of the model. The logic is that every individual who is found to be infected is effectively isolated and removed from the infection chain, thus becoming part, in reality, of the R compartment. This is different from the models we discussed originally, which were meant to model the data in another way using the SEIR/SIR formalism (the compartments are defined differently, and we have argued somewhat vaguely why we think this works ... ). The SIR models considered in these "TRUE" models are pure SIR models, that is, there is no delay term in the equations. I will come back and write the exact equations here.

June 22, 28-29, July 5-6, 13, August 2: Updated. July 5, 6 updated with weekend or Monday data. Next update in two weeks.

June 15: This section will not be updated again until August 17, or then again, only occasionally until then. Updated today.

June 1, 8: Updated. For some countries there is an old forecast, in green, and a newer one in red. In the Swedish forecast, the red curve is the smoothed daily fatalities.

May 28: There is a new forecast for Finland. In green, the old model; in red, the new model. There is also a new forecast for the USA.

May 25: Updated.

May 18: Updated; the green curve is extended to 4 July.

May 12: The forecasts of some countries are now extended by a couple of months from May 4. There is now a forecast for Denmark. On the weekend all forecasts will be extended by two months. The Swedish model has, in red, the smooth number of daily fatalities.

May 10: Earlier today I had posted the wrong file for the US .. it is now the correct one ... it follows the trend very tightly. These graphs will probably be updated on wee

Attachments:
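The "TRUE"-model reading described above (cumulative reported cases matched to the R compartment, on the logic that confirmed cases are isolated, i.e. removed) can be sketched as follows. This is an illustration only; the parameter values are placeholders, not fitted values from the thread.

```python
# Sketch of the "TRUE"-model reading: integrate a pure SIR model (no delay
# term) and read its R compartment as the cumulative-reported-cases trend.
# Parameter values below are placeholders, not fitted values.

def pure_sir(days, p, beta, gamma, i0=1.0, steps_per_day=10):
    s, i, r = p - i0, i0, 0.0
    dt = 1.0 / steps_per_day
    daily_r = [r]
    for _ in range(days * steps_per_day):
        ds = -beta * s * i / p
        di = beta * s * i / p - gamma * i
        dr = gamma * i
        s += ds * dt
        i += di * dt
        r += dr * dt
        daily_r.append(r)
    # r sampled once per day: the model's cumulative-cases trend
    return daily_r[::steps_per_day]

# The forecast daily-new-cases trend is then the day-to-day difference:
# new_cases[t] = r[t] - r[t-1].
```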
Posted 1 year ago
 Look at the equations AND the initial conditions (for S). What you see (relative to the y axis) is what you get ... (I should include a plot with S) ... hope this helps; if not, we can come back to it at some point.
Posted 1 year ago
 I did not understand your comments on population scaling. If one scales S, I, E, R by p, keeping \beta, \sigma, \gamma fixed, the p-dependence drops out of the equations (as it should). So solving the population-scaled equations is mathematically equivalent to solving while keeping p as a parameter.

Thanks, hari dass
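For what it is worth, the invariance described here can be checked line by line. With the 1/p normalization used in this thread, the SEIR system is

```latex
S' = -\frac{\beta S I}{p}, \quad
E' = \frac{\beta S I}{p} - \sigma E, \quad
I' = \sigma E - \gamma I, \quad
R' = \gamma I ,
```

and substituting S = p\tilde{s}, E = p\tilde{e}, I = p\tilde{i}, R = p\tilde{r}, then dividing each equation by p, gives

```latex
\tilde{s}' = -\beta \tilde{s} \tilde{i}, \quad
\tilde{e}' = \beta \tilde{s} \tilde{i} - \sigma \tilde{e}, \quad
\tilde{i}' = \sigma \tilde{e} - \gamma \tilde{i}, \quad
\tilde{r}' = \gamma \tilde{i} ,
```

so p indeed drops out with \beta, \sigma, \gamma held fixed.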
Posted 1 year ago
 Thank you!
Posted 1 year ago
 Dear Martin,

Please give me a couple of (or a few - I am really busy at the moment, fortunately) days to produce a clean version of it. If you have done a little mathematical and physics modeling, you will probably understand why I chose to do it the way I do ... I have another version of it that produces a steady state ... etc. I will try to pack it all in there, depending on how much time I have. Once you understand how to operate on the basic equations, you can get to model pretty much any effect which depends on susceptibility (lockdowns, lifting lockdowns more or less strongly, etc.). I will attach the notebook in a reply to your reply directly (there is no space up in the main sections). Alternatively, I might start a new section for this purpose ... when ready, I will let you know.

Best, Enrique
Posted 1 year ago
 Hi Enrique,

Could you please post the notebook for your Spain model? I am curious as to how susceptibility changes as regards lifting restrictions.

Many thanks, Martin.
Posted 1 year ago
 I would like to add that in China the situation was strictly controlled until the end, so what had to stay constant (susceptibility) did ... I am afraid that elsewhere this will not happen. I already have a model which allows for the growth of susceptibility. It is beyond the scope of this thread to discuss it here, but I plan to make a notebook available in the aftermath, when we have the whole picture. So when restrictions are lifted, susceptibility, the parameter that controls the size of the epidemic, grows again, maybe ever so slowly, all depending ... all the measures put together work to determine that one quantity, which is overall the most important one.

Enrique
Posted 1 year ago
 Robert,

Thank you very much!

Best, Enrique
Posted 1 year ago
 Enrique,

Attached are two notebooks which I am currently using. The fitting methods are embedded in the report procedures. When fitting to cumulative cases, the large numbers in the latest data dominate the fit, but they are probably also more accurate. At the end of CDCData.nb there is a method for fitting the first differences of the cumulative data to a first-difference formula derived from the logistic model, if you also want to see more directly the influence of the early data. We are about to enter the tail phase of the epidemic in the U.S. The model requires an exponential decay in the tail. That didn't happen with South Korea, but the China data did fit. If the U.S. data doesn't follow the tail behavior, that would indicate the model isn't valid.

Bob

Attachments:
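The first-difference idea can be sketched as follows (an illustration, not the method in the attached CDCData.nb): for a logistic model the discrete relation dC = r·C·(1 - C/K) makes dC/C linear in C, so a plain least-squares line recovers r and K without an iterative optimizer. The function name and parameterization here are my own.

```python
# Sketch of fitting a logistic model through first differences of the
# cumulative counts (illustrative; not the method in CDCData.nb).
# Since dC/C = r - (r/K)*C is linear in C, ordinary least squares gives
# r (the intercept) and K (= -r / slope).

def fit_logistic_differences(cumulative):
    """cumulative: nondecreasing case counts; returns (r, K) estimates."""
    xs, ys = [], []
    for c0, c1 in zip(cumulative, cumulative[1:]):
        if c0 > 0:
            xs.append(c0)               # C
            ys.append((c1 - c0) / c0)   # dC / C
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    r = my - slope * mx                 # intercept of the fitted line
    K = -r / slope                      # carrying capacity
    return r, K
```

Because the late, large counts carry most of the leverage in a direct cumulative fit, working with dC/C gives the early data comparatively more say, which seems to be the point being made above.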
Posted 1 year ago
 Dear Robert,

Would you happen to have a brief notebook which implements your logistic model based on the number of cumulative cases? I would be very grateful to you ... I need this because I have a case where I only have the cumulative cases data. I thank you in advance for your help.

Best regards, Enrique
Posted 1 year ago
 Hello,

Is it possible to post an animation, and how?

Thanks.
Posted 1 year ago
 In this section I discuss an SIR-like model which works almost as well as the SEIR model, at least with the Chinese data. I will have to investigate whether I can make it work as well with other data sets.

Our SIR-like model is described as follows:

s'(t) = - beta * s(t) * i(t) / p
i'(t) = beta * s(t) * i(t) / p - gamma * i(t - n)
r'(t) = gamma * i(t - n)

The function s(t) is the number of susceptible people (the people that can get exposed to the pathogen) at time t; i(t) is the number of people who are infected and infective; r(t) is the number of people who have become resistant to the pathogen: they have recovered and developed immunity, or died. Now the parameters: beta is the rate of infection or "force of infection", gamma is the removal rate, and n is a shift parameter used in part to line up the curves (see a bit of an explanation of this in the SEIR section). The parameter values are in the titles of the pictures for each country. In general we assume i(0) = 1 unless stated otherwise in the model label. Also, s(0) = p and r(0) = 0, unless otherwise stated.

We present in the picture an SIR model, and its parameters, that fits the Chinese infection data. With time, I will try to fit the SIR-like model to other data sets. I also plan to investigate the known analytic solutions (although I need to understand how the shift parameter changes the classical solutions) in order to attempt automatic fitting via computational optimization in Mathematica. This is work in progress.

August 30: Positivity rates picture updated. For the time being, we will no longer update this picture.

August 10, 17: Positivity rates picture updated. Aside from Mexico, which has a huge positivity rate, Sweden still has a very high one.

June 15, 22, 29: Updated June 29, July 13, 27. The positivity-ratios picture will not be updated again until August 17, or then again, only occasionally. All countries in the picture have brought this ratio down over time, except Mexico.

June 1, 8: Positivity rates updated.

May 21: The positivity rates file will be updated on Mondays from now on. It is updated today.

April 21-May 12: The SIR models will now only be updated occasionally. The positivity rates picture will be updated daily.

April 19-20: Finland updated. A new SIR model for Finland with a different recovery schedule is included. We now include the current positivity rates for various countries (number of cases / number of tests).

April 18: There is now an SIR model for Finland which uses JHU data.

April 14: I have added the SIR model for Italy. I will not update this daily. Later, I will make a notebook available for this model. If I am able to complete it, I will also have in the notebook a program to find the parameters to fit the model to the data - but I cannot promise I will have that, at least not soon. Note the forecast is somewhat more optimistic, both in regard to total susceptibility and duration of o

Attachments:
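As a reading aid for the delayed equations above, here is a minimal Euler sketch on a unit time grid, with the integer shift n applied to the removal term. Taking i(t) = 0 for t < 0 as the history is my assumption; the post does not say how the initial history is handled, and the actual notebooks may do it differently.

```python
# Euler integration of the SIR-like model above on a unit grid: the removal
# term uses the delayed value i(t - n). The zero history for t < 0 is an
# assumption, not something stated in the thread.

def delayed_sir(days, p, beta, gamma, n, i0=1.0):
    s, i, r = float(p), float(i0), 0.0   # s(0) = p, r(0) = 0, as in the post
    hist = [i]                            # i values on the integer grid
    out = [(s, i, r)]
    for t in range(1, days + 1):
        i_lag = hist[t - 1 - n] if t - 1 - n >= 0 else 0.0
        new_inf = beta * s * i / p
        removed = gamma * i_lag
        s -= new_inf
        i += new_inf - removed
        r += removed
        hist.append(i)
        out.append((s, i, r))
    return out
```

Note that until n days have elapsed nothing is removed, which is how the shift parameter lines the r curve up with the delayed reporting of removals.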
Posted 1 year ago
 This post has been listed in the main resource-hub COVID-19 thread: https://wolfr.am/coronavirus in the section Computational Publications. Please feel free to add your own comment on that discussion pointing to this post ( https://community.wolfram.com/groups/-/m/t/1888335 ) so many more interested readers will become aware of your excellent work. Thank you for your effort!
Posted 1 year ago
 In this section I attach the notebook. I will update it soon and add new features to it, such as the effect of lifting restrictions early.

August 30, September 6: The pdf document has been updated. Next update at the end of September.

June 29, July 5, 13, 19, 27, August 2, 10, 17: The pdf is updated. Next update in a week.

This notebook is also posted in a reply to Kaurov, above the Scandinavian countries section. I will post (hopefully soon; I did not get it ready June 22) the new notebook for adjusting parameters automatically in both sections. In the reply to Kaurov, there is also a pdf document with daily new-cases curves for several countries, now also attached here. This will not be updated until August 17, or then again, maybe only occasionally. It is interesting to see that some countries which brought their numbers down initially have settled in a plateau of a small number of daily cases, without being able to completely eliminate the v

Attachments:
Posted 1 year ago
 You are right of course in what you say, but there is a caveat ... in using computational methods, I have had to "intervene" and fit to just part of the data, rather than the whole set, when the data was "messy" and behaved unpredictably ... I have since developed a more or less systematic way of doing these interventions with the aid of the additional parameter in the model (note it is not exactly an SEIR model). My line of thinking is that computational tools aided with further AI computational tools that emulate the actions of the interventions would be the ideal way to go about this. If I ever get time to do it, I will try to build such a tool. But for now, my approximate methods, the use of a more complicated model to get first approximations, etc. will have to do. Perhaps later we can discuss how one might do the fits with established methods in Mathematica ... if you are up to doing something like that. There are a couple of us now working together on this ...
Posted 1 year ago
 I'm afraid I also lacked that foresight and only have a little saved (and most of that is just for the USA), as it was a while before I realized the significance of the changing variables. I've begun a more thorough archive NOW though, and I expect to keep it up for the future! I'll be happy to hand that archive to you whenever you like.

It strikes me that the hand-adjustment of the variables that you've been doing is analogous to the best-fit-line trick of laying a ruler through a scatterplot of data with a linear trend and eyeballing and equalizing dots above/below the line. It is quite effective for datasets with a clear linear trend, and likewise, your methods are self-evidently quite effective for these SEIR-like trend datasets.

A much more computationally intensive method would be to apply the appropriate Euler's method (or Runge-Kutta, I think 4th order? method) to the SEIR equations for the dataset at some early time for each group reporting data (world/country/province) with enough volume and shape to be fit to, and re-do that best fit for each day data is reported. The changes required of the variables to maintain that best fit would then be tied to a more mathematically rigorous definition of "best" fit as well. Doing this and comparing the changes of variables over time per group would also seem likely to help determine correlations between generalizations of how the epidemic progresses, as well as comparisons of each group's susceptibilities and methods of combating the virus.
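The Runge-Kutta step alluded to above would look something like this generic sketch (parameter names follow the thread's beta/sigma/gamma convention; the code is not tied to any of the thread's notebooks, and the fitting loop around it is left out):

```python
# Generic 4th-order Runge-Kutta step for the SEIR system discussed in this
# thread. A sketch only: the surrounding day-by-day refitting loop is not
# shown, and parameter values would come from the fit.

def seir_rhs(state, beta, sigma, gamma, p):
    s, e, i, r = state
    return (-beta * s * i / p,
            beta * s * i / p - sigma * e,
            sigma * e - gamma * i,
            gamma * i)

def rk4_step(state, dt, *args):
    k1 = seir_rhs(state, *args)
    k2 = seir_rhs(tuple(x + 0.5 * dt * k for x, k in zip(state, k1)), *args)
    k3 = seir_rhs(tuple(x + 0.5 * dt * k for x, k in zip(state, k2)), *args)
    k4 = seir_rhs(tuple(x + dt * k for x, k in zip(state, k3)), *args)
    return tuple(x + dt * (a + 2 * b + 2 * c + d) / 6
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))
```

A daily refit would then integrate forward with `rk4_step` under candidate parameters and minimize the misfit to the reported series, re-running the minimization each day as new data arrives.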
Posted 1 year ago
 Dear Zach,

Thank you very much ... indeed, my intent was to be "utilitarian" in the sense you point out, and I think I have more or less succeeded ... But let me tell you, you have outdone me now, and I would almost like to ask you for help. If you have kept the daily slides, then please keep saving them ... and later, perhaps you might make them available to me as a time sequence ... I ask because I have not kept track of the parameters as time goes by (lack of foresight on my part), and indeed, that would be very useful information for the sequel of this work, which has to do with the methodology for fitting the parameters. As you mention, having the history of those fits would go a long way towards completing that work; but I have not been keeping track of that history (basically because my methods work well for the time being, and lack of foresight combined with lack of time to make a tool or keep track of all of them on a daily basis). With all that said, I hope we can stay in touch and correspond about this matter later on. Once again, thank you for your good words, and I will try to keep making this as useful as I can to the community at large. If only I had the resources, I would do this for all countries. But for now, this will have to do.

Best regards, Enrique Garcia Moreno E.
Posted 1 year ago
 Doctor Moreno, I've been following this discussion, which I found through Robert Nassar's, for nearly a month, and I've been consistently impressed by not only your research and models (which are the best fits to the data that I've seen ANYWHERE so far) but also by the explanation and discussion of the work. I actually made a Wolfram account primarily so that I could post this message to let you know how much this kind of work (and education about the underpinnings of the work) is appreciated. I look forward to further exploration of Wolfram's community in hopes of finding similar quality discussions of subjects more closely related to my field, as well as to seeing the full body of this work in publication. Many thanks!

As I understand it, you're plotting a standard SEIR curve and doing a best fit to the reported data (as reported by various health organizations) - therefore, looking ahead and using this model predictively, the future curve is an indication not necessarily of how many are actually infected but of how many will be reported as such on each coming day. To me this is very interesting because it sidesteps many of the underlying issues of tainted data due to under-testing, selective testing, flaws in tests, etc. and just rolls them up into the variables of the model. Picking those apart after the fact is very interesting, but in the moment, given the data set(s) available, this seems like perhaps the most useful utilitarian presentation of a model.

It would be very interesting to me to see, as a 3D plot or even as an animation (use time to show another variable?), how your model has been adjusted as time and the epidemic have progressed. I've done that crudely for myself just by flipping through the images I've saved for reference of the plots you've presented and updated daily, watching the curves change as new data points appeared, and I think a representation of that, with the variable values adjusting as the model adjusts, would be revealing.

I think of most things as plots of curves or surfaces first, equations driving their shapes second, so I tend to be biased toward a graphical representation. Therefore, I'm highly interested in a plot of the underlying variables used for each best-fit SEIR curve day by day as the model was adjusted to fit each new data point. Having a near-constant value the whole way through would be the classic case, where the model is an excellent fit to the phenomenon all the way through. In this case though, I believe that one of the big differences is in your use of a much smaller S initially than the total population of each region reporting data (as one example variable). The explanation that the entire population isn't actually susceptible early on (for instance, due to social distancing/quarantining attempts) rings true to me (I think back to chemical rate or mixing equations from Calc II and remember that those always assumed "well mixed" tanks of solutions, which is NOT always the case in real life!). Therefore, a plot of how S needed to change with time within each country to keep a best fit would likely be very revealing of how the epidemic surged (or didn't) through the population, and changes in its slope might correlate with changes in testing, reporting of data, treatment methods, institution of distancing measures, etc.

A deep dive into the other variables' change over time would likely be equally revealing. I'm too new to Wolfram software to know if this "historical" data for earlier fits is still included in this notebook, but it would be of interest!
Posted 1 year ago
 Hi,This is not possible at the moment, it is incomplete work. At the end of the day, I do quite a bit of fine tuning based on my by now fairly good understanding of the behavior of the parameters. You also need to make judgement calls regarding the quality of the data, and on what part of the data stream you want a better fit. For example, in the China model I fit to the tail first (once it was there), and then adjust to fit the front part of the data stream. This is, at the moment, an interactive process. I hope to have useful tools to do this later on more or less automatically. But that is quite a bit of development work.
Posted 1 year ago
 Hi, can you post the code you use for the fitting, to obtain beta, sigma, gamma, Ishift, Eshift, for example for some random data of {day, infected, resistant}?

Thanks.
Posted 1 year ago
 Dear Enrique, the data are here, for the regions, updated every day at 18:00: https://github.com/pcm-dpc/COVID-19/blob/master/dati-regioni/dpc-covid19-ita-regioni.csv

The regions are all there: the most hit, as you know, are Lombardia, Emilia, Veneto, Piemonte, but the disease is spreading all over the country. I would have many questions to ask you. 1: What are the parameters (beta, sigma, gamma, p, Es, Is)? 2: Apparently there is no lockdown effect/date in your model? 3: I am puzzled by the Wuhan case: how did you optimize the parameters? The early data are clearly filled with artefacts. Did you fit/optimize using the later data? Is the model robust enough to find its way through the overall behaviour, insensitive to "strange" data? 4: Would it be possible to split fatal from recovered? 5: Could we have some discussion through private email or phone? If so, my email is roberto.battiston@unitn.it - please reply there.

Thanks again for your very interesting work, it is exactly what we need.

Roberto
Posted 1 year ago
 Dear Roberto,

I would be glad to try to come up with a model for the Northern regions of Italy, if I can find the data. Alternatively, you can tell me where to find it (that is probably a better solution), or you can send it to me (I can tell you how, or you can post it as Mathematica lists of numbers in your reply). I can curate it by hand if necessary. Please bear in mind that it is difficult to really know how the parameters are going to end up until the I curve has peaked, although I have learned to make educated guesses for this disease now. So I think I might be able to get something that is close to the reality. Glad to help if I can. Personally, I don't think the outbreak in Italy is going to get much worse than is implied already in the posted model, in terms of total number of cases (susceptibility).

Best, Enrique
Posted 1 year ago
 Enrique, I am writing from Italy. I am a physicist. Today I was trying to see how to adapt SEIR to the OBSERVED values and I found your work! Impressive. Congrats. You know how bad the situation is in Italy. Your work would be extremely useful. I would like to suggest additionally testing your work on a couple of regions in Italy: Lombardia, Emilia Romagna, Veneto and Piemonte. In this way you could spot whether or not there are differences in the fitting parameters. Would it be possible at all? Please reply asap: time is of the essence.

With my best regards, Roberto Battiston
Posted 1 year ago
 I have updated the notebook again; there was a mistake in the notebook I posted before.

August 30, September 6: pdf document updated. Next update at the end of September.

June 15, 22, 28-29, July 5-6, 13, 19, 27, August 2, 10, 17: PDF document updated with weekend data - no country seems to be succeeding in eliminating the virus completely. NEXT UPDATE in a week. The June 22-promised notebook for fitting parameters automatically is not yet ready; I will try to get it ready this week, but I can't promise, except that it will be ready some time. --- The pdf document will now be updated only on weekends. It is interesting to note how many countries which have brought the number of new cases down have settled on a small constant number of daily cases below which the numbers are not dropping, for example, Austria and the Czech Republic. Some countries have been unable to bring down their numbers at all, such as Poland. Some countries have just peaked (US, Russia), and some countries are growing (Mx, Brazil).

June 1, 8: Updated. In this section there will soon be two new notebooks, pertaining to the automatic determination of parameters and to making susceptibility grow.

May 20-21: The pdf with smooth daily tallies now includes tallies for Russia and Brazil, in addition to all the previous 20 countries. This file will be updated on Mondays only.

May 12: The pdf with smooth daily tallies has moved here. Soon I will post a notebook with the

Attachments:
Posted 1 year ago
 Hi,

I have updated my notebook (March 20) with the China model and data. I will add the other models when they stabilize.
Posted 1 year ago
 Well, it seems our forecasts were quite on the dot ... there were no new infections yesterday, March 19, aside from imported cases. I am now trying to extend the models to other countries and get a feeling for how long this will last. It seems to me that the dynamics of this disease are pretty much the same everywhere, except for the volume (effective susceptibility). It seems that the time from when the number of cases starts climbing to when there are almost no cases is at most about four months. Let's hope this is the case. The important thing is to then guard against other waves. I hope the whole world keeps its guard up.
Posted 1 year ago
 Hello,

The published notebook is not my working notebook. I am on travel for a week. I hope to get to updating the published notebook then. Apologies, and thank you.
Posted 1 year ago
 Dear @Enrique, is the notebook you attached to your head post the latest one? I see many images in your post that are not in that notebook. It would be great to see your most complete recent notebook for all things that you demonstrated. Thank you for sharing, this is very nice!
Posted 1 year ago
 Attached. I tried embedding it, and even though I am signed into the cloud, I can't do it.
Posted 1 year ago
 Thanks!
Posted 1 year ago
 I made a notebook deriving the logistic model from your SEIR equations with a simple assumption.
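For readers following along, one "simple assumption" that collapses these compartment models to a logistic curve is that the infective pool stays proportional to the cumulative count. This is a sketch of the general idea (written here in SIR form), not necessarily the assumption used in the notebook above. With cumulative cases c = i + r (so s = p - c):

```latex
c' = i' + r'
   = \frac{\beta s i}{p}
   = \beta\Big(1 - \frac{c}{p}\Big)\, i ,
\qquad\text{and if } i \approx \kappa c \text{ for some constant } \kappa,
\qquad
c' \approx \beta\kappa\, c\Big(1 - \frac{c}{p}\Big),
```

which is the logistic equation with growth rate \beta\kappa and carrying capacity p.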
Posted 1 year ago
 Right, I apologize I haven't done it; it has taken me a while to get this right. I will try to get it done by tomorrow; otherwise, I will be traveling and it would have to wait a week. The trouble is that I am using approximate data, or data straight from websites, rather than the GitHub data, which I haven't found a way to read programmatically. But an approximation to the nearest hundred works fine (as on the JHU CSSE website itself). In the notebook I will explain other aspects pertaining to the data in more detail, such as a correction for the change in counting methods. I am now modeling the Italian outbreak, but it is too early to settle on any parameters ... I think, and I want to believe, that it will be controlled ... apologies again, and thank you for your patience.
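For what it's worth, the GitHub time-series files are plain CSV and straightforward to read programmatically. A minimal sketch in Python (in the Wolfram Language, `Import[..., "CSV"]` would serve the same purpose); the column layout below matches the csse_covid_19_time_series files (Province/State, Country/Region, Lat, Long, then one column per date), but the numbers are made up:

```python
# Sketch of reading the JHU CSSE time-series CSV layout; sample data is made up.
import csv, io

sample = """Province/State,Country/Region,Lat,Long,1/22/20,1/23/20,1/24/20
Hubei,China,30.97,112.27,444,444,549
Beijing,China,40.18,116.41,14,22,36
,Italy,41.87,12.57,0,0,0
"""

def country_series(text, country):
    """Sum the per-province rows into one cumulative series per country."""
    reader = csv.reader(io.StringIO(text))
    dates = next(reader)[4:]          # date columns start at index 4
    totals = [0] * len(dates)
    for row in reader:
        if row[1] == country:
            for i, v in enumerate(row[4:]):
                totals[i] += int(v)
    return dict(zip(dates, totals))

print(country_series(sample, "China"))
# {'1/22/20': 458, '1/23/20': 466, '1/24/20': 585}
```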
Posted 1 year ago
 Dear @Enrique Garcia Moreno E., thank you for sharing! Could you please share the notebook with the Wolfram Language code? You can attach it or embed it into the post.
Posted 1 year ago
 Excellent to get this kind of confirmation ... later tonight, EET, I will post an update with the equations and parameters of the model, which changed a bit last night, for the last time I hope. I can now really calculate R0 classically, and it is different, unfortunately higher ... Thanks for sharing this.
Posted 1 year ago
 If I am understanding your graph correctly, it looks as if it is predicting the end of the epidemic at about the same time as a simpler logistic model based only on cumulative cases. In the graphs below, day number 1 is the first data day in the JHU database, 1/22/2020. Also, the curve looks as though it is becoming more symmetric, like the logistic growth model.
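The symmetry remark can be made precise: for logistic growth C(t) = K / (1 + e^(−r(t − t₀))), the daily increments are exact mirror images around the inflection day. A quick numerical check (K, r, and t₀ are arbitrary illustrative values, not fitted ones):

```python
import math

# Logistic cumulative-case curve; K, r, t0 are arbitrary illustrative values.
def logistic(t, K=80_000, r=0.22, t0=38):
    return K / (1 + math.exp(-r * (t - t0)))

cum = [logistic(t) for t in range(80)]
daily = [b - a for a, b in zip(cum, cum[1:])]   # daily new cases

# The increment k days after the inflection equals the one k days before it:
# daily[t0 + k] == daily[t0 - 1 - k]  (here t0 = 38).
for k in range(10):
    assert math.isclose(daily[38 + k], daily[37 - k])
```

Real cumulative-case data, by contrast, are usually right-skewed; watching whether the incidence curve becomes symmetric is one informal way to judge how logistic-like the outbreak is.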
Posted 1 year ago
 Thank you, your points are well taken and I will do so in the next round ... there are already some updates, some changes, and some things that I still need to clarify (or else I am creating confusion) ... but I think I will wait a few days and see how close future data come to the model without modifying it further ... I hope in the end to have an accurate description of what is observed in the data, and I hope to publish the equations that define the model. Thank you once again.
Posted 1 year ago
 Thank you, it's much clearer now.

As a teacher of graphing and visualization, may I propose that you label the chart axes with units and also label the plotted series (which colour is which variable)? That makes it much easier to understand the graph at a glance.

Thanks for your efforts and for sharing them.
Posted 1 year ago
 Hello,

No, I see there is a bit of a misunderstanding. I answer your questions as best I can, one by one:

1) That is correct ... here is the webpage with their data and graphics: https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6

2) No, my model is a model for a susceptible population of 90 000 (it can be from anywhere). It fits the JHU data. If you read the rest of my post, you will see that I explain that the main effect of the containment measures is to reduce the effective susceptible population. So although a city's population might be 11M, with the containment measures that number dropped to an effective susceptible population of about 90 000. It is a model of the JHU data. There is no data for Helsinki yet, as we don't have any spread here at the moment. I hope this is clear enough for now.

3) Blue is the number of infections at a moment in time. It is time dependent. It is what is called the I curve. It is NOT the total number of confirmed infections; you need to think of it (I won't go into the mathematical details) as that number minus the recoveries and fatalities in your data, to get something similar to the red dots.

4) Magenta is the NUMBER (NOT percentage) of cases that are resistant, essentially those that are no longer infected plus the fatalities, among the 90 000 susceptible ones. The horizontal axis scale is days, which is standard.

You have to keep in mind that this model works provided that things stay in check and there are no new paths of infection ... otherwise, there is still a huge uninfected population out there that could become susceptible. The difficulty now is to keep this under control.

So no, I apologize, but you did not get it quite right; hopefully this helps. I hope it is useful, especially because it has been hard to get a good fit for the data. Thanks for your questions.
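Point 3 in code form: the blue I curve is roughly cumulative confirmed minus recoveries minus fatalities. A tiny illustration with made-up numbers:

```python
# Active (currently infected) cases from the three cumulative series.
# All numbers below are made up for illustration.
confirmed = [100, 300, 800, 1500, 2100]
recovered = [0, 10, 60, 300, 900]
deaths    = [2, 9, 25, 60, 100]

active = [c - r - d for c, r, d in zip(confirmed, recovered, deaths)]
print(active)  # [98, 281, 715, 1140, 1100]
```

Note how the active count can turn downward (last entry) even while cumulative confirmed cases are still rising, which is exactly why the I curve peaks before the epidemic is over.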
Posted 1 year ago
 Enrique, can you confirm the following?

1) You based your model on the Mainland China infection numbers from Johns Hopkins CSSE (JHU).

2) Your graph is then a model run on the Helsinki population, with 90 000 susceptible being the starting number for the population.

3) Blue is the number of confirmed infections (?) still ongoing (?) (i.e. not the total, not the cumulative count, not the daily count)?

4) Magenta is the (percentage?) of recovered (axis scale missing?).

Did I get this right? Thanks for the model!
Posted 1 year ago
 You are of course right about all of what you say. My goal was to find a phenomenological reason, with implications for the values of the parameters, that would make it possible for me to model the data, and in that sense I succeeded. I agree with you that one has to be cautious about the scaling. But for the data that we have, we have some kind of model. I will update this data daily and see if it continues to fit the model. I expect it will, and what the model predicts is in line with the expectations of some epidemiologists.

Right, real data is messy ... you have to do the best you can with it. I think that what I did somewhat works.

About recovery: it should mean "not infectious", given the tests that patients need to go through to be discharged. However, there are documented cases in which discharged individuals test positive again later on, leading to the suspicion that the disease might be biphasic in some instances.

And correct, each focus will probably need its own, if similar, parameters. I am tracking several of the new foci, and we will see what happens with time. Unfortunately, it is harder to get data broken up into clusters of interest. Surely it can be obtained, but it is not easy to get.
Posted 1 year ago
 Your points about the susceptible population size are well taken. Unlike modeling chemical reactions in a beaker (the math is the same), one cannot assume instantaneous and uniform mixing in epidemiological models. The population size does have an effect, depending on the type of incidence used for the force of infection, so one cannot always simply scale the model for larger or smaller populations.

I found an article in The Guardian about the change in counting methods in Hubei. Real data are invariably messy. Understanding exactly what the data are is crucial to modeling them accurately. For example, does "recovered" mean no longer infectious in the epidemiological sense, or does it mean asymptomatic in the medical sense? And, of course, there is always the question of unreported cases to deal with in these retrospective analyses.

I'm quite certain that one will have to model each focus or cluster of infections individually. The derived parameters should be comparable, but they are most likely not transferable.
Posted 1 year ago
 Thanks for adding the additional detail about the model; it helps one understand the results better.

It looks as though you are using the data from Hubei, based on the large number of confirmed cases. However, when I downloaded the JHU data from GitHub (https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series) last evening, I got similar but noticeably different values. What is your data source, and what preprocessing did you do? What was the adjustment that you made?

What assumptions did you make when choosing a susceptible population size of 180,000? The population of Hubei is very much larger:

In[22]:= WolframAlpha["population Hubei China", "Result"]
Out[22]= Quantity[58160000, "People"]

It will have a dramatic effect on the dynamics, especially if one uses mass action incidence for the force of infection as opposed to standard incidence. Which did you use? Models for the SARS epidemic used mass action incidence.

I have found these data quite challenging to model. My current model has 7 compartments and includes quarantine. It's still very preliminary, and I hope to have it ready next week, but here is a peek at its structure (as a Petri net):
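The mass-action vs. standard-incidence distinction above is easy to demonstrate numerically: with standard incidence βSI/N, the epidemic's fractional course is invariant when N is rescaled, while with mass action βSI (β fixed) it is not. A toy SIR comparison (all parameter values are illustrative assumptions):

```python
# Toy SIR with either standard incidence (beta*S*I/N) or
# mass action incidence (beta_ma*S*I, beta_ma fixed).
def peak_fraction(N, beta=0.3, gamma=0.1, beta_ma=None, days=300, dt=0.02):
    """Peak infectious prevalence as a fraction of N, by forward Euler."""
    S, I = N - 1.0, 1.0
    peak = 0.0
    for _ in range(int(days / dt)):
        force = (beta_ma * S * I) if beta_ma is not None else (beta * S * I / N)
        S -= force * dt
        I += (force - gamma * I) * dt
        peak = max(peak, I / N)
    return peak

small, big = 10_000, 100_000
std_small = peak_fraction(small)                      # standard incidence
std_big   = peak_fraction(big)                        # ~ same peak fraction
ma_small  = peak_fraction(small, beta_ma=0.3 / small) # matches std at N=10_000
ma_big    = peak_fraction(big,   beta_ma=0.3 / small) # effective R0 grows 10x
print(f"standard: {std_small:.3f} vs {std_big:.3f}; "
      f"mass action: {ma_small:.3f} vs {ma_big:.3f}")
```

Under mass action, scaling N up by 10 with β fixed multiplies the contact rate, and hence R0, by 10, so the peak fraction explodes; under standard incidence it stays put. This is why the choice of incidence matters so much when extrapolating a fitted model to a larger population.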