Last updated on 2026-01-16 05:51:26 CET.
| Package | ERROR | NOTE | OK |
|---|---|---|---|
| bayestestR | | | 13 |
| insight | | 1 | 12 |
| modelbased | | | 13 |
| parameters | 1 | 1 | 11 |
| performance | 1 | | 12 |
Current CRAN status for bayestestR: OK: 13
Current CRAN status for insight: NOTE: 1, OK: 12
Version: 1.4.4
Check: package dependencies
Result: NOTE
Package suggested but not available for checking: 'fungible'
Flavor: r-oldrel-windows-x86_64
Current CRAN status for modelbased: OK: 13
Current CRAN status for parameters: ERROR: 1, NOTE: 1, OK: 11
Version: 0.28.3
Check: package dependencies
Result: NOTE
Package suggested but not available for checking: ‘M3C’
Flavor: r-oldrel-macos-arm64
Version: 0.28.3
Check: package dependencies
Result: NOTE
Package suggested but not available for checking: 'EGAnet'
Flavor: r-oldrel-windows-x86_64
Version: 0.28.3
Check: tests
Result: ERROR
```
Running 'testthat.R' [40s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
> library(parameters)
> library(testthat)
>
> test_check("parameters")
Starting 2 test processes.
> test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: condition, talk
> test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: condition, talk
> test-model_parameters.afex_aov.R: Contrasts set to contr.sum for the following variables: treatment, gender
> test-model_parameters.aov_es_ci.R:
Error:
! testthat subprocess exited in file
'test-model_parameters.aov_es_ci.R'.
Caused by error:
! R session crashed with exit code -1073741819
Backtrace:
▆
1. └─testthat::test_check("parameters")
2. └─testthat::test_dir(...)
3. └─testthat:::test_files(...)
4. └─testthat:::test_files_parallel(...)
5. ├─withr::with_dir(...)
6. │ └─base::force(code)
7. ├─testthat::with_reporter(...)
8. │ └─base::tryCatch(...)
9. │ └─base (local) tryCatchList(expr, classes, parentenv, handlers)
10. │ └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
11. │ └─base (local) doTryCatch(return(expr), name, parentenv, handler)
12. └─testthat:::parallel_event_loop_chunky(queue, reporters, ".")
13. └─queue$poll(Inf)
14. └─base::lapply(...)
15. └─testthat (local) FUN(X[[i]], ...)
16. └─private$handle_error(msg, i)
17. └─cli::cli_abort(...)
18. └─rlang::abort(...)
Execution halted
```
Flavor: r-oldrel-windows-x86_64
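The exit code -1073741819 in the log above is the signed 32-bit rendering of a Windows NTSTATUS value; masking it to 32 bits recovers 0xC0000005 (STATUS_ACCESS_VIOLATION), i.e. the R subprocess segfaulted rather than failing an assertion. A quick sketch of the conversion:

```python
# Windows reports process exit codes as signed 32-bit integers; a crash
# status like NTSTATUS 0xC0000005 therefore shows up as a negative number.
code = -1073741819

# Mask with 0xFFFFFFFF to map the two's-complement value back to the
# unsigned 32-bit NTSTATUS code.
unsigned = code & 0xFFFFFFFF

print(hex(unsigned))  # 0xc0000005 (STATUS_ACCESS_VIOLATION, a segfault)
```

This is why the failure surfaces as "R session crashed" instead of an ordinary testthat failure report: the test process died before it could report results.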
Current CRAN status for performance: ERROR: 1, OK: 12
Version: 0.15.3
Check: tests
Result: ERROR
```
Running ‘testthat.R’ [18s/10s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(performance)
>
> test_check("performance")
Starting 2 test processes.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_itemscale.R: Some of the values are negative. Maybe affected items need to be
> test-check_itemscale.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-check_collinearity.R: NOTE: 2 fixed-effect singletons were removed (2 observations).
Saving _problems/test-check_collinearity-157.R
Saving _problems/test-check_collinearity-185.R
> test-check_overdispersion.R: Overdispersion detected.
> test-check_overdispersion.R: Underdispersion detected.
> test-check_outliers.R: No outliers were detected (p = 0.238).
> test-glmmPQL.R: iteration 1
> test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
> test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
> test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-item_discrimination.R: Some of the values are negative. Maybe affected items need to be
> test-item_discrimination.R: reverse-coded, e.g. using `datawizard::reverse()`.
> test-performance_aic.R: Model was not fitted with REML, however, `estimator = "REML"`. Set
> test-performance_aic.R: `estimator = "ML"` to obtain identical results as from `AIC()`.
[ FAIL 2 | WARN 0 | SKIP 41 | PASS 443 ]
══ Skipped tests (41) ══════════════════════════════════════════════════════════
• On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3',
'test-bootstrapped_icc_ci.R:44:3', 'test-binned_residuals.R:163:3',
'test-binned_residuals.R:190:3', 'test-check_convergence.R:1:1',
'test-check_dag.R:1:1', 'test-check_distribution.R:1:1',
'test-check_itemscale.R:1:1', 'test-check_itemscale.R:100:1',
'test-check_model.R:1:1', 'test-check_collinearity.R:193:1',
'test-check_collinearity.R:226:1', 'test-check_residuals.R:2:3',
'test-check_singularity.R:2:3', 'test-check_singularity.R:30:3',
'test-check_zeroinflation.R:73:3', 'test-check_zeroinflation.R:112:3',
'test-check_outliers.R:115:3', 'test-check_outliers.R:339:3',
'test-helpers.R:1:1', 'test-item_omega.R:1:1', 'test-item_omega.R:31:3',
'test-compare_performance.R:1:1', 'test-mclogit.R:56:1',
'test-model_performance.bayesian.R:1:1',
'test-model_performance.lavaan.R:1:1', 'test-model_performance.merMod.R:2:3',
'test-model_performance.merMod.R:37:3', 'test-model_performance.psych.R:1:1',
'test-model_performance.rma.R:36:1', 'test-performance_reliability.R:23:3',
'test-pkg-ivreg.R:1:1', 'test-r2_bayes.R:39:3', 'test-r2_nagelkerke.R:35:3',
'test-rmse.R:39:3', 'test-test_likelihoodratio.R:55:1'
• On Mac (4): 'test-check_predictions.R:1:1', 'test-icc.R:1:1',
'test-nestedLogit.R:1:1', 'test-r2_nakagawa.R:1:1'
• getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:300:3'
══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-check_collinearity.R:157:3'): check_collinearity | afex ──────
Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning.
── Failure ('test-check_collinearity.R:185:3'): check_collinearity | afex ──────
Expected `expect_message(ccoW <- check_collinearity(aW))` to throw a warning.
[ FAIL 2 | WARN 0 | SKIP 41 | PASS 443 ]
Error:
! Test failures.
Execution halted
```
Flavor: r-release-macos-arm64
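The repeated "Maybe affected items need to be reverse-coded" messages in the log above refer to the standard Likert-scale transformation that `datawizard::reverse()` performs in R: a response x on a scale [low, high] is mapped to low + high - x, which flips the item's direction and turns negative item-total correlations positive. A minimal sketch of that arithmetic (the `reverse_code` helper is hypothetical, for illustration only):

```python
def reverse_code(values, low, high):
    """Reverse-code item responses on a bounded scale.

    A response x on the scale [low, high] becomes low + high - x,
    so the lowest response maps to the highest and vice versa.
    """
    return [low + high - x for x in values]

# On a 1-5 Likert scale: 1 -> 5, 2 -> 4, 5 -> 1.
print(reverse_code([1, 2, 5], 1, 5))  # [5, 4, 1]
```

This is only context for reading the diagnostic messages; in the actual test suite the messages are expected output, and the two failures are about `expect_message()` not capturing a warning, not about the reverse-coding itself.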