Can blockchain save us?
claym711 said:
It has been long enough. If it was going to take off here, it would have by now. Wuhan may be devastated but it's not going to be anywhere near that serious here.
GE said:
I have a decent sense of most of that, but am not following from that to what the other poster said. The data released doesn't fit well enough to be completely formulaic unless someone is adjusting them just to throw noise into the system. I'm questioning the statement that it is statistically impossible for those numbers to be the actual numbers.
C@LAg said:
Cut and pasted from the internet. You should get the gist of it from this:
GE said:
Explain please
R-square tells us how much variation in the dependent variable is accounted for by the regression model; the adjusted value tells us how much variance in the dependent variable would be accounted for if the model had been derived from the population from which the sample was taken. Specifically, it reflects the goodness of fit of the model to the population, taking into account the sample size and the number of predictors used.
A data set should always be explored to see whether it meets the assumptions of the statistical methods applied. The multivariate analyses we intend to perform assume normality, linearity, and absence of multicollinearity.
Normality refers to the shape of the data distribution for an individual variable and its correspondence to the normal distribution. In this study, the assumptions of normality were examined by looking at histograms of the data, and by checking skewness and kurtosis. The distribution is considered normal when it is bell shaped and values of skewness and kurtosis are close to zero.
The linearity of the relationship between the dependent and independent variables represents the way changes in the dependent variable are associated with the independent variables, namely, that there is a straight-line relationship between the independent variables and dependent variable. This assumption is essential as regression analysis only tests for a linear relationship between the independent variables and dependent variable. Pearson correlation can capture the linear association between variables.
Multicollinearity is the existence of a strong linear relationship among the predictor variables, and it prevents the effect of each variable from being identified. Many scholars recommend examining the variance inflation factor (VIF) and tolerance level (TOL) as tools for multicollinearity diagnostics. VIF represents the increase in variance that exists due to collinearities and interrelationships among the variables. VIFs larger than 10 indicate strong multicollinearity; equivalently, as a rule of thumb, the tolerance (TOL = 1/VIF) should not fall below 0.1.
An R-square of 1 indicates a perfect fit. That is, you've explained all of the variance there is to explain. You can always get R-square = 1 if you have a number of predictor variables equal to the number of observations, or, if you've also estimated an intercept, equal to the number of observations minus one.
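For anyone who wants to poke at these definitions, here is a rough numpy-only sketch of R-square, adjusted R-square, and VIF. The data are invented purely for illustration (not the reported numbers being argued about):

```python
import numpy as np

def r_squared(y, y_hat):
    # R^2 = 1 - SS_res / SS_tot: fraction of variance explained by the model.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n, k):
    # Penalizes R^2 for the number of predictors k, given n observations.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def vif(X):
    # Variance inflation factor for each column: 1 / (1 - R_j^2), where
    # R_j^2 comes from regressing column j on the remaining columns.
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        out.append(1.0 / (1.0 - r_squared(X[:, j], A @ beta)))
    return out

rng = np.random.default_rng(0)
x = np.arange(30, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 2.0, size=30)   # linear trend plus noise

A = np.column_stack([np.ones_like(x), x])           # intercept + one predictor
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = r_squared(y, A @ beta)
print(r2)                            # high, but not 1.000 -- the data are noisy
print(adjusted_r_squared(r2, n=30, k=1))

# Two nearly collinear predictors push the VIF far past the 10 rule of thumb.
x2 = x + rng.normal(0.0, 0.1, size=30)
print(vif(np.column_stack([x, x2])))
```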
If the R was a law of nature I would agree.
C@LAg said:
Aren't they saying the R went from 2.1 to almost 1? Thus it is statistically impossible unless someone is lazily fudging the numbers.
GE said:
Show me the derivation of that being 1.000. I haven't calculated it, but that's not how it looks to me.
Nuclear Scramjet said:
An R^2 value of 1.000 means the real world data is a perfect fit to a mathematical prediction model. This is fundamentally impossible because there is no such thing as a perfect fit. You will always have noise and data points that don't fit a model exactly. In other words, you will have data points that are regularly above or below the trendline with varying percentage differences. You may see an R^2 of 0.923 if you have a really good model, but never a 1.000. A perfect fit means every single data point fit the trendline almost exactly, so close that the R^2 is 1.000. This is not possible for real world data, which means the Chinese numbers are clearly just made up from a formula.
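To illustrate the one degenerate case where an R^2 of exactly 1.000 does show up (as the pasted text notes, when the parameter count matches the observation count): a model with as many free parameters as data points will "explain" even pure noise perfectly. A small numpy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(5, dtype=float)
y = rng.normal(size=5)          # pure noise: there is no real relationship here

# A degree-4 polynomial has 5 coefficients for 5 points: it interpolates exactly.
coeffs = np.polyfit(x, y, deg=4)
y_hat = np.polyval(coeffs, x)

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_val = 1.0 - ss_res / ss_tot
print(r2_val)                   # 1.0 to machine precision, and completely meaningless
```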
Exactly!
k2aggie07 said:
R^2 comes from regression; it's a comparison of the measured data to the model.
In real life you don't get empirical data to have R^2 values of 0.999 unless you're measuring something for which you have an analytical solution.
What I mean is, if you're measuring a behavior against a known physical formula, what your regression is telling you is your measurement error.
When you're looking at statistical sampling, your best fit curve is created to fit your data - so the regression is telling you your model error.
If your regression is zero / R^2 is 1.000 that means you have no measurement error and no model error. Can't be, without rounding or general lack of precision. Or tampering.
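A sketch of that distinction, using a free-fall example with invented noise (the formula and the noise level are just assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.1, 2.0, 50)
g = 9.81
d_true = 0.5 * g * t**2                           # known physics: free-fall distance
d_meas = d_true + rng.normal(0.0, 0.05, size=50)  # simulated measurement noise

def r2(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Compare against the analytical formula: the residual is pure measurement error.
print(r2(d_meas, d_true))

# Fit a curve to the data itself: the residual now reflects model error instead.
coeffs = np.polyfit(t, d_meas, deg=2)
print(r2(d_meas, np.polyval(coeffs, t)))
```

Either way the R^2 lands close to, but strictly below, 1.000 as long as any noise is present.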
AG 2000' said:
Based on the reported 2-2.5 week incubation period, we're at least eight days out from knowing if we have a problem here in the US of A.
claym711 said:
It has been long enough. If it was going to take off here, it would have by now. Wuhan may be devastated but it's not going to be anywhere near that serious here.
GE said:
Your last comment only applies to the mortality rate, doesn't it?
C@LAg said:
Read his comment here:
GE said:
Show me the derivation of that being 1.000. I haven't calculated it, but that's not how it looks to me.
Look at the point graph, at the daily observations (the green line).
It has achieved a nearly perfect flat line (at 2.1%) for the last week.
Daily deviation is essentially 0, so the R^2 = 1.
Ask him.
swimmerbabe11 said:
That's the second or third time you've attempted to call him out. Do you think it will work this time?
False. Incubation is not 14-18 days.
Madman said:
I understand why hygiene is important for preventing transmission but why does it affect the incubation period?
JJxvi said:
Seems bad for the CDC to recommend 14 day isolation for a 16 day incubation period disease...
A lot of cases are taking as long as 2-3 weeks to actually turn serious after first becoming symptomatic, as well.
AG 2000' said:
Incubation period was the wrong term to use. It's about time from exposure to exhibiting symptoms.