R provides a family of functions for working with the Normal distribution:

pnorm - cumulative distribution for x
qnorm - inverse of pnorm (from probability gives x)
dnorm - distribution density
rnorm - random number from normal distribution

The same functions are available for a variety of distributions: punif (uniform), pbinom (binomial), pnbinom (negative binomial), ppois (Poisson), pgeom (geometric), phyper (hyper-geometric), pt (t distribution), pf (F distribution).
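All four prefixes work the same way for each distribution: d for density, p for cumulative probability, q for quantile and r for random draws. A minimal sketch with the binomial distribution (the parameter values here are arbitrary):

dbinom(3, size=10, prob=0.5)   # density: P(X = 3)
pbinom(3, size=10, prob=0.5)   # cumulative probability: P(X <= 3)
qbinom(0.5, size=10, prob=0.5) # quantile: the median
rbinom(5, size=10, prob=0.5)   # five random draws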
10 random values from the Normal distribution with mean 10 and standard deviation 5:
rnorm(10, mean=10, sd=5)
[1] 5.693620 4.429255 0.442925 9.223783 14.217208 10.117813 10.052955 9.347432
[9] 7.879201 3.260119
dnorm gives the value of the density function at a given point:
dnorm(10, mean=10, sd=5)
[1] 0.07978846
dnorm(100, mean=10, sd=5)
[1] 3.517499e-72
pnorm gives the cumulative probability of observing a value less than or equal to x:
pnorm(10, mean=10, sd=5)
[1] 0.5
qnorm is the inverse of pnorm(): given a probability, it returns the corresponding value of x:
qnorm(0.5, mean=10, sd=5)
[1] 10
qnorm(0.95, mean=0, sd=1)
[1] 1.644854
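A common use of qnorm is to find the critical values enclosing the central 95% of a distribution; a quick sketch for the standard Normal:

qnorm(c(0.025, 0.975), mean=0, sd=1) # two-sided 5% cutoffs: -1.96 and 1.96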
Recall our histogram of Wind Speed from yesterday:
weather <- read.csv("ozone.csv")
hist(weather$Wind, col="steelblue", xlab="Wind Speed",
main="Distribution of Wind Speed",
breaks = 20, freq=FALSE)
windMean <- mean(weather$Wind)
windSD <- sd(weather$Wind)
dnorm(10, mean=windMean, sd=windSD)
[1] 0.1132311
We can mark this density value on the histogram with the points function, using the same plotting code as we just saw:
hist(weather$Wind, col="steelblue", xlab="Wind Speed",
main="Distribution of Wind Speed",
breaks = 20, freq=FALSE)
points(10, dnorm(10, mean=windMean, sd=windSD),
col="red", pch=16)
To trace the density across the whole range, we want connected lines in this case rather than points:
xs <- c(0,5,10,15,20)
ys <- dnorm(xs, mean=windMean, sd=windSD)
hist(weather$Wind, col="steelblue", xlab="Wind Speed",
main="Distribution of Wind Speed",
breaks = 20, freq=FALSE)
lines(xs, ys, col="red")
For a smoother curve we need many more x values, which we can generate with the seq() function. (A formal test of normality is available via ?shapiro.test, but it is not really recommended by statisticians.)
hist(weather$Wind, col="steelblue", xlab="Wind Speed",
     main="Distribution of Wind Speed", breaks = 20, freq=FALSE)
xs <- seq(0,20, length.out = 10000)
ys <- dnorm(xs, mean=windMean,sd=windSD)
lines(xs,ys,col="red")
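As an aside, base R's curve() function can draw the same density without generating the x values by hand; a sketch assuming the histogram, windMean and windSD from above:

curve(dnorm(x, mean=windMean, sd=windSD), add=TRUE, col="red")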
To test whether the mean differs from some hypothesised value, we can compute the one-sample t statistic:

t = (x̄ − μ₀) / (s / √n)

where x̄ is the sample mean, μ₀ is the hypothesised mean, s is the sample standard deviation and n is the sample size. For example, testing whether the mean wind speed differs from 2:
t <- (windMean - 2) / (windSD/sqrt(length(weather$Wind)))
t
[1] 27.93897
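The two-sided p-value can then be obtained from the t distribution with pt(), using n - 1 degrees of freedom; a sketch based on the statistic just computed:

n <- length(weather$Wind)
2 * pt(-abs(t), df=n-1) # two-sided p-value for our t statistic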
In practice, we use the t.test() function to compute the statistic and corresponding p-value:
t.test(weather$Wind, mu=2)
One Sample t-test
data: weather$Wind
t = 27.939, df = 152, p-value < 2.2e-16
alternative hypothesis: true mean is not equal to 2
95 percent confidence interval:
9.394804 10.520229
sample estimates:
mean of x
9.957516
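t.test() returns an object (of class "htest") whose components can be extracted individually; for example:

result <- t.test(weather$Wind, mu=2)
result$p.value  # just the p-value
result$conf.int # the 95% confidence interval
result$estimate # the sample mean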
Many other standard tests are built into R; see their help pages:
?var.test
?t.test
?wilcox.test
?prop.test
?cor.test
?chisq.test
?fisher.test
As an example, let us test whether men in the patients dataset tend to be heavier than women. We need to run this if we don't have the patients data in our R environment:
patients <- read.delim("patient-info.txt")
First, we can test if the variance of the two groups is the same:
var.test(patients$Weight~patients$Sex)
F test to compare two variances
data: patients$Weight by patients$Sex
F = 0.14216, num df = 49, denom df = 44, p-value = 3.59e-10
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
0.07900337 0.25344664
sample estimates:
ratio of variances
0.1421572
The variances differ significantly, so we use the Welch two-sample t-test, which does not assume equal variances:
t.test(patients$Weight~patients$Sex, var.equal=FALSE)
Welch Two Sample t-test
data: patients$Weight by patients$Sex
t = -11.204, df = 55.168, p-value = 7.786e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-15.62079 -10.88094
sample estimates:
mean in group Female mean in group Male
68.95980 82.21067
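We can check those group means directly; a quick sketch using tapply:

tapply(patients$Weight, patients$Sex, mean) # mean weight for each sex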
If we were unwilling to make an assumption of normality, a non-parametric test could be used:
wilcox.test(patients$Weight~patients$Sex)
Wilcoxon rank sum test with continuity correction
data: patients$Weight by patients$Sex
W = 59, p-value = 1.993e-15
alternative hypothesis: true location shift is not equal to 0
Linear models are fitted with the lm() function. To see a worked demonstration, run:

example(lm)

lm is really useful for fitting lines of best fit to XY data, in order to determine the intercept, gradient and Pearson's correlation coefficient.

Three steps to plotting with a best fit line:
1. Plot the points with plot()
2. Fit the model with lm()
3. Add the line of best fit with the abline() function

Let's see a toy example:
x <- c(1, 2.3, 3.1, 4.8, 5.6, 6.3)
y <- c(2.6, 2.8, 3.1, 4.7, 5.1, 5.3)
plot(x,y, xlim=c(0,10), ylim=c(0,10))
The ~ symbol is used to define a formula; i.e. "y is given by x". Note the order of x and y in the plot and lm expressions:
plot(x,y, xlim=c(0,10), ylim=c(0,10))
myModel <- lm(y~x)
abline(myModel, col="red")
The generic summary function gives an overview of the model:
summary(myModel)
Call:
lm(formula = y ~ x)
Residuals:
1 2 3 4 5 6
0.33159 -0.22785 -0.39520 0.21169 0.14434 -0.06458
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.68422 0.29056 5.796 0.0044 **
x 0.58418 0.06786 8.608 0.0010 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.3114 on 4 degrees of freedom
Multiple R-squared: 0.9488, Adjusted R-squared: 0.936
F-statistic: 74.1 on 1 and 4 DF, p-value: 0.001001
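The object returned by summary() can itself be queried; for instance, to extract the R-squared value (a sketch):

summary(myModel)$r.squared       # proportion of variance explained
sqrt(summary(myModel)$r.squared) # |r|, the magnitude of Pearson's correlation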
names(myModel) # Names of the objects within myModel
[1] "coefficients" "residuals" "effects" "rank" "fitted.values"
[6] "assign" "qr" "df.residual" "xlevels" "call"
[11] "terms" "model"
coef(myModel) # Coefficients
(Intercept) x
1.6842239 0.5841843
resid(myModel) # Residuals
1 2 3 4 5 6
0.33159186 -0.22784770 -0.39519512 0.21169160 0.14434418 -0.06458482
fitted(myModel) # Fitted values
1 2 3 4 5 6
2.268408 3.027848 3.495195 4.488308 4.955656 5.364585
residuals(myModel) + fitted(myModel) # what values does this give?
1 2 3 4 5 6
2.6 2.8 3.1 4.7 5.1 5.3
You can also get some diagnostic information on the model.
par(mfrow=c(2,2))
plot(myModel)
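The fitted model can also be used to predict the response at new x values with predict(); a sketch using the hypothetical values 7 and 8:

predict(myModel, newdata=data.frame(x=c(7, 8))) # predicted y at x = 7 and 8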
Now suppose an experiment with two explanatory variables, x and z, and one response variable, y. We define the relationship between y and the explanatory variables using a tilde ~, placing the response variable on the left of the tilde and the explanatory variables on the right:
y~x # If x is continuous, this is linear regression
y ~ x
y~x # If x is categorical, ANOVA
y ~ x
y~x+z # If x and z are continuous, multiple regression
y ~ x + z
y~x+z # If x and z are categorical, two-way ANOVA
y ~ x + z
y~x+z+x:z # : is the symbol for the interaction term
y ~ x + z + x:z
y~x*z # * is a shorthand for x+z+x:z
y ~ x * z
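These formulas plug straight into lm(). For instance, a multiple regression on the weather data, assuming the Ozone, Temp and Wind columns used elsewhere in this course:

multiModel <- lm(Ozone ~ Temp + Wind, data=weather)
summary(multiModel)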
As an exercise, consider the relationship between temperature and ozone level in the weather data:
plot(weather$Temp, weather$Ozone, xlab="Temperature", ylab="Ozone level", pch=16)
The paste function can be used to join strings of text together, or variables:
paste("Hello","World")
[1] "Hello World"
age <- 35
paste("My age is", age)
[1] "My age is 35"
## Your Answer Here ##
Correlation != Causation
http://tylervigen.com/spurious-correlations
So if I want to win a Nobel Prize, I should eat even more chocolate?!
But no one would ever take such trends seriously... would they?
Cutting down on ice cream was once recommended as a safeguard against polio!