The goal of this analysis was to better understand the shape of the developmental curve. To do this, we used linear mixed-effects models so that variance across individual meta-analyses could be accounted for.

Summary of investigations:

- Meta-analytic mixed-effects models are promising, but…
- The random effects structure (specifically, whether paper effects are nested within meta-analysis) makes a big difference to the implied curve shape.
- Subsetting to only studies with multiple age groups improves performance.

Below, our data set is shown separated by meta-analysis with four fitted curve types: (1) linear, (2) logarithmic, (3) quadratic, and (4) both linear and logarithmic predictors.

We ran mixed-effects regressions for each of the four models, with meta-analysis (data set) included as a random intercept, and then compared the models with chi-squared (likelihood ratio) tests. Model comparison found the quadratic model to be a better fit than the linear and logarithmic models, but no difference between the linear and logarithmic models. (Note that the three single-predictor models are not nested and have the same number of parameters, so the chi-squared comparisons between them have zero degrees of freedom; for those pairs, AIC is the more informative criterion.) The two-predictor model (linear and log) also outperformed both the single-predictor linear model and the single-predictor logarithmic model.

```
# lme4 for lmer(); knitr for kable()
library(lme4)
library(knitr)

lin_lmer <- lmer(d_calc ~ mean_age +
                   (1 | dataset),
                 weights = 1 / d_var_calc,
                 REML = FALSE, data = all_data)
log_lmer <- lmer(d_calc ~ log(mean_age) +
                   (1 | dataset),
                 weights = 1 / d_var_calc,
                 REML = FALSE, data = all_data)
quad_lmer <- lmer(d_calc ~ I(mean_age^2) +
                    (1 | dataset),
                  weights = 1 / d_var_calc,
                  REML = FALSE, data = all_data)
lin_log_lmer <- lmer(d_calc ~ mean_age + log(mean_age) +
                       (1 | dataset),
                     weights = 1 / d_var_calc,
                     REML = FALSE, data = all_data)
kable(coef(summary(lin_lmer)))
```

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 0.1818852 | 0.0987245 | 1.842352 |
| mean_age | 0.0006572 | 0.0001339 | 4.906331 |

`kable(coef(summary(log_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | -0.2700640 | 0.2614831 | -1.032816 |
| log(mean_age) | 0.1218559 | 0.0430248 | 2.832224 |

`kable(coef(summary(quad_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 0.3017539 | 0.0892738 | 3.380094 |
| I(mean_age^2) | 0.0000006 | 0.0000001 | 5.707729 |

`kable(coef(summary(lin_log_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 1.1492594 | 0.3956334 | 2.904860 |
| mean_age | 0.0012102 | 0.0002562 | 4.722740 |
| log(mean_age) | -0.2054932 | 0.0814209 | -2.523838 |

`kable(anova(lin_lmer, log_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_lmer | 4 | 1650.943 | 1670.113 | -821.4717 | 1642.943 | NA | NA | NA |
| log_lmer | 4 | 1666.508 | 1685.677 | -829.2539 | 1658.508 | 0 | 0 | 1 |

`kable(anova(lin_lmer, quad_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_lmer | 4 | 1650.943 | 1670.113 | -821.4717 | 1642.943 | NA | NA | NA |
| quad_lmer | 4 | 1642.621 | 1661.790 | -817.3105 | 1634.621 | 8.322412 | 0 | 0 |

`kable(anova(log_lmer, quad_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| log_lmer | 4 | 1666.508 | 1685.677 | -829.2539 | 1658.508 | NA | NA | NA |
| quad_lmer | 4 | 1642.621 | 1661.790 | -817.3105 | 1634.621 | 23.8868 | 0 | 0 |

`kable(anova(lin_lmer, lin_log_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_lmer | 4 | 1650.943 | 1670.113 | -821.4717 | 1642.943 | NA | NA | NA |
| lin_log_lmer | 5 | 1646.607 | 1670.569 | -818.3034 | 1636.607 | 6.33659 | 1 | 0.0118272 |

`kable(anova(log_lmer, lin_log_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| log_lmer | 4 | 1666.508 | 1685.677 | -829.2539 | 1658.508 | NA | NA | NA |
| lin_log_lmer | 5 | 1646.607 | 1670.569 | -818.3034 | 1636.607 | 21.90098 | 1 | 2.9e-06 |
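As a sanity check on how `anova()` arrives at these numbers for nested models, the chi-squared statistic is just the difference in deviances, with degrees of freedom equal to the difference in parameter counts. A minimal sketch, using the deviances for `lin_lmer` and `lin_log_lmer` reported above:

```
# Likelihood ratio test by hand for lin_lmer (4 params) vs lin_log_lmer (5 params),
# using the deviances from the anova table above.
chisq_stat <- 1642.943 - 1636.607            # difference in deviances
chi_df     <- 5 - 4                          # difference in parameter counts
p_value    <- pchisq(chisq_stat, df = chi_df, lower.tail = FALSE)
p_value                                      # ~0.0118, matching Pr(>Chisq) in the table
```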

Focusing on the linear, log, and two-predictor (linear and log) models, we built predictions for all complete meta-analyses. The plot, separated by meta-analysis, is presented below. The linear and two-predictor models are quite similar, differing mainly at very early ages.

```
all_data_preds <- all_data %>%
  select(dataset, short_name, d_calc, d_var_calc, mean_age, study_ID) %>%
  filter(complete.cases(.)) %>%
  mutate(lin_pred = predict(lin_lmer),
         log_pred = predict(log_lmer),
         lin_log_pred = predict(lin_log_lmer))
```
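The faceted plot itself is not reproduced in this document; a minimal sketch of how it could be generated (assuming `ggplot2`, which is not shown in the original code, and using the prediction columns built above) is:

```
# Hypothetical plotting code (ggplot2 is an assumption; the original plotting
# code is not shown): effect sizes with model predictions, faceted by meta-analysis.
library(ggplot2)

ggplot(all_data_preds, aes(x = mean_age)) +
  geom_point(aes(y = d_calc, size = 1 / d_var_calc), alpha = 0.3) +
  geom_line(aes(y = lin_pred), colour = "blue") +
  geom_line(aes(y = log_pred), colour = "red") +
  geom_line(aes(y = lin_log_pred), colour = "darkgreen") +
  facet_wrap(~ short_name, scales = "free_y") +
  guides(size = "none") +
  labs(x = "Mean age", y = "Effect size (d)")
```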

Overall, this method looks promising. The linear predictor in the two-predictor model appears to smooth the logarithmic curve at the edges, and the random effects capture cross-data-set variability.

We then expanded our random effects structure to include paper (study_ID) nested in meta-analysis (data set), to capture the fact that a given meta-analysis has a specific set of papers and may have multiple entries from the same paper. Here we focused on the two-predictor model. The simpler random effects model from above is repeated here for easy comparison. A chi-squared test found a significant difference between the models, with the more complex random effects structure yielding a lower AIC.

```
lin_log_paper_lmer <- lmer(d_calc ~ mean_age + log(mean_age) +
                             (1 | dataset / study_ID),
                           weights = 1 / d_var_calc,
                           REML = FALSE, data = all_data)
kable(coef(summary(lin_log_paper_lmer)))
```

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 1.1129224 | 0.4302470 | 2.586705 |
| mean_age | 0.0013871 | 0.0002696 | 5.144504 |
| log(mean_age) | -0.2043289 | 0.0875258 | -2.334497 |

`kable(coef(summary(lin_log_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 1.1492594 | 0.3956334 | 2.904860 |
| mean_age | 0.0012102 | 0.0002562 | 4.722740 |
| log(mean_age) | -0.2054932 | 0.0814209 | -2.523838 |

`kable(anova(lin_log_lmer, lin_log_paper_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_log_lmer | 5 | 1646.607 | 1670.569 | -818.3034 | 1636.607 | NA | NA | NA |
| lin_log_paper_lmer | 6 | 1606.451 | 1635.205 | -797.2257 | 1594.451 | 42.15558 | 1 | 0 |

We then generated predictions from this new, more complex model and compared them to the model without the nested paper random effect. Initial testing found that including the nested random effect in the predictions led to massive overfitting, so only the simple random effects structure (intercept by data set) was used to compute predictions. The figure below compares the two curves. They are very similar, suggesting that while nesting paper significantly improves the model, the visual difference is small.

```
lin_log_paper_preds <- predict(lin_log_paper_lmer,
                               re.form = ~ (1 | dataset))
all_data_preds <- all_data_preds %>%
  mutate(lin_log_paper_pred = lin_log_paper_preds)
```

Another way to make the random effects structure more complex is to add a random slope. We built two more models with a random slope of log-transformed mean age: one without nesting paper, and one with paper nested in meta-analysis. Summaries of the new random-slope models and the previous models without a random slope are below. Model comparison found that the model with both the nested structure and the slope was better than the simplest model (no nesting, no slope), and also better than the model with only the slope. Based on this, the best model appears to be one that includes both the nested random effects structure and a random slope of log-transformed mean age.

```
lin_log_slope_lmer <- lmer(d_calc ~ mean_age + log(mean_age) +
                             (log(mean_age) | dataset),
                           weights = 1 / d_var_calc,
                           REML = FALSE, data = all_data)
lin_log_paper_slope_lmer <- lmer(d_calc ~ mean_age + log(mean_age) +
                                   (log(mean_age) | dataset / study_ID),
                                 weights = 1 / d_var_calc,
                                 REML = FALSE, data = all_data)
kable(coef(summary(lin_log_slope_lmer)))
```

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | -0.8785347 | 1.2320001 | -0.7130962 |
| mean_age | 0.0007779 | 0.0003956 | 1.9663661 |
| log(mean_age) | 0.1532819 | 0.2253929 | 0.6800653 |

`kable(coef(summary(lin_log_paper_slope_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | -0.6890899 | 1.4606225 | -0.4717782 |
| mean_age | 0.0012108 | 0.0004799 | 2.5231793 |
| log(mean_age) | 0.1013289 | 0.2676234 | 0.3786251 |

`kable(coef(summary(lin_log_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 1.1492594 | 0.3956334 | 2.904860 |
| mean_age | 0.0012102 | 0.0002562 | 4.722740 |
| log(mean_age) | -0.2054932 | 0.0814209 | -2.523838 |

`kable(coef(summary(lin_log_paper_lmer)))`

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 1.1129224 | 0.4302470 | 2.586705 |
| mean_age | 0.0013871 | 0.0002696 | 5.144504 |
| log(mean_age) | -0.2043289 | 0.0875258 | -2.334497 |

`kable(anova(lin_log_lmer, lin_log_slope_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_log_lmer | 5 | 1646.607 | 1670.569 | -818.3034 | 1636.607 | NA | NA | NA |
| lin_log_slope_lmer | 7 | 1624.237 | 1657.784 | -805.1187 | 1610.237 | 26.36942 | 2 | 1.9e-06 |

`kable(anova(lin_log_lmer, lin_log_paper_slope_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_log_lmer | 5 | 1646.607 | 1670.569 | -818.3034 | 1636.607 | NA | NA | NA |
| lin_log_paper_slope_lmer | 10 | 1573.875 | 1621.799 | -776.9375 | 1553.875 | 82.73179 | 5 | 0 |

`kable(anova(lin_log_paper_lmer, lin_log_slope_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_log_paper_lmer | 6 | 1606.451 | 1635.205 | -797.2257 | 1594.451 | NA | NA | NA |
| lin_log_slope_lmer | 7 | 1624.237 | 1657.784 | -805.1187 | 1610.237 | 0 | 1 | 1 |

`kable(anova(lin_log_paper_lmer, lin_log_paper_slope_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_log_paper_lmer | 6 | 1606.451 | 1635.205 | -797.2257 | 1594.451 | NA | NA | NA |
| lin_log_paper_slope_lmer | 10 | 1573.875 | 1621.799 | -776.9375 | 1553.875 | 40.57621 | 4 | 0 |

`kable(anova(lin_log_slope_lmer, lin_log_paper_slope_lmer))`

| | Df | AIC | BIC | logLik | deviance | Chisq | Chi Df | Pr(>Chisq) |
|---|---|---|---|---|---|---|---|---|
| lin_log_slope_lmer | 7 | 1624.237 | 1657.784 | -805.1187 | 1610.237 | NA | NA | NA |
| lin_log_paper_slope_lmer | 10 | 1573.875 | 1621.799 | -776.9375 | 1553.875 | 56.36237 | 3 | 0 |

As before, predictions were generated from the new models and plotted over the data. Overall, the models look quite similar in their predicted fits.

```
lin_log_slope_preds <- predict(lin_log_slope_lmer,
                               re.form = ~ (log(mean_age) | dataset))
lin_log_paper_slope_preds <- predict(lin_log_paper_slope_lmer,
                                     re.form = ~ (log(mean_age) | dataset))
all_data_preds <- all_data_preds %>%
  mutate(lin_log_slope_pred = lin_log_slope_preds,
         lin_log_paper_slope_pred = lin_log_paper_slope_preds)
```

One concern with using a random slope with paper nested in meta-analysis is that not all papers included more than one age group. To check that the random slope is appropriate, we next restrict the data to papers that tested more than one age group.

To start, we (somewhat arbitrarily) required at least two age groups more than two months apart. Below we fit a new model using only data from papers meeting this criterion. The model structure is the same as the model with paper nested in meta-analysis and a random slope of log-transformed mean age.

```
multiage_data <- all_data_preds %>%
  group_by(dataset, study_ID) %>%
  mutate(count_ages = length(unique(floor(mean_age / 2)))) %>%
  filter(count_ages > 1)
lin_log_paper_slope_multiage_lmer <- lmer(d_calc ~ mean_age + log(mean_age) +
                                            (log(mean_age) | dataset / study_ID),
                                          weights = 1 / d_var_calc,
                                          REML = FALSE, data = multiage_data)
kable(coef(summary(lin_log_paper_slope_multiage_lmer)))
```

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | -0.7928146 | 1.5960750 | -0.4967277 |
| mean_age | 0.0015200 | 0.0005976 | 2.5433514 |
| log(mean_age) | 0.0877137 | 0.2949065 | 0.2974288 |

Predictions were calculated from the new model and visually compared to the simple model (no nested random effects, no random slope) fit on the full data set. The plot below suggests that, for this reduced data set, adding the nested random effects and random slope does produce a visibly different curve from the simpler model.

```
lin_log_paper_slope_multiage_preds <- predict(lin_log_paper_slope_multiage_lmer,
                                              re.form = ~ (log(mean_age) | dataset))
multiage_data <- multiage_data %>%
  ungroup() %>%
  select(dataset, short_name, study_ID, d_calc, mean_age, d_var_calc, lin_log_pred) %>%
  mutate(lin_log_paper_slope_multiage_pred = lin_log_paper_slope_multiage_preds)
```

We repeated the same analysis, further reducing the data set to papers that included at least three different age groups.

```
multiage_3_data <- all_data_preds %>%
  group_by(dataset, study_ID) %>%
  mutate(count_ages = length(unique(floor(mean_age / 2)))) %>%
  filter(count_ages > 2)
lin_log_paper_slope_multiage_3_lmer <- lmer(d_calc ~ mean_age + log(mean_age) +
                                              (log(mean_age) | dataset / study_ID),
                                            weights = 1 / d_var_calc,
                                            REML = FALSE, data = multiage_3_data)
kable(coef(summary(lin_log_paper_slope_multiage_3_lmer)))
```

| | Estimate | Std. Error | t value |
|---|---|---|---|
| (Intercept) | 0.4881677 | 2.0274048 | 0.2407845 |
| mean_age | 0.0023811 | 0.0007098 | 3.3545042 |
| log(mean_age) | -0.1847708 | 0.3778747 | -0.4889738 |

Predictions were calculated from the new model and visually compared to the simple model (no nested random effects, no random slope) fit on the full data set. The plot below suggests that for this further reduced data set, the difference between the two models becomes even more apparent.

```
lin_log_paper_slope_multiage_3_preds <- predict(lin_log_paper_slope_multiage_3_lmer,
                                                re.form = ~ (log(mean_age) | dataset))
multiage_3_data <- multiage_3_data %>%
  ungroup() %>%
  select(dataset, short_name, study_ID, d_calc, mean_age, d_var_calc, lin_log_pred) %>%
  mutate(lin_log_paper_slope_multiage_3_pred = lin_log_paper_slope_multiage_3_preds)
```

In this report we sought to find the best curve to describe developmental data. In terms of fixed effects, the best fit was a model that included both mean age and log-transformed mean age as predictors. We further improved the fit with a more complex random effects structure: (1) nesting paper within meta-analysis, and (2) including a random slope of log-transformed mean age. This structure was most appropriate when we restricted the data to papers with at least two or three age groups, ensuring that the random slope applied to all data analyzed.