The goal of rscopus is to provide an R Scopus Database API Interface.
In order to use this package, you need an API key from https://dev.elsevier.com/sc_apis.html. To get one:

1. Go to https://dev.elsevier.com/sc_apis.html and log in from your institution.
2. Go to "Create API Key". Enter a label, such as "rscopus key", and a website URL. http://example.com is fine if you do not have a site.
3. Read and agree to the terms of service.
4. Add Elsevier_API = "API KEY GOES HERE" to your ~/.Renviron file, or add export Elsevier_API=API KEY GOES HERE to your ~/.bash_profile.

Alternatively, you can set the API key within R using rscopus::set_api_key or by options("elsevier_api_key" = api_key). You can access the API key using rscopus::get_api_key.
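As a minimal sketch of the in-R alternatives (the key string below is a placeholder, not a real key):

library(rscopus)
# set the key for the current R session
set_api_key("API KEY GOES HERE")
# or, equivalently, store it as an R option
options("elsevier_api_key" = "API KEY GOES HERE")
# retrieve whichever key rscopus will use
get_api_key()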
You should be able to test out the API key using the interactive Scopus APIs.
The API Key is bound to a set of IP addresses, usually those of your institution. Therefore, if you are using this for a Shiny application, you must host the Shiny application on your institution's servers in some way. Also, you cannot access the Scopus API with this key when you are offsite unless you VPN into your institution's network or use a computing cluster with an institutional IP.
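A quick sanity check of both requirements (a key is set and your current IP is authorized) uses the same helper functions that appear in the example below:

library(rscopus)
have_api_key()            # TRUE if a key was found via options() or Elsevier_API
is_elsevier_authorized()  # TRUE only if a test call to the API succeeds from this IP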
This is a basic example which shows you how to solve a common problem:
library(rscopus)
library(dplyr)
# TRUE only if the API key works from the current IP address
authorized = is_elsevier_authorized()
if (have_api_key()) {
  # retrieve a single abstract by its PII and format it as a BibTeX entry
  x = abstract_retrieval("S1053811915002700", identifier = "pii",
                         verbose = FALSE)
  res = bibtex_core_data(x)
  cat(res)
  if (authorized) {
    # tidy data.frame of an author's publications
    res = author_df(last_name = "Muschelli", first_name = "John",
                    verbose = FALSE, general = FALSE)
    names(res)
    head(res[, c("title", "journal", "description")])
    unique(res$au_id)
    unique(as.character(res$affilname_1))
    # general = TRUE keeps the raw Scopus column names, so rename a few
    all_dat = author_data(last_name = "Muschelli",
                          first_name = "John", verbose = FALSE, general = TRUE)
    res2 = all_dat$df
    res2 = res2 %>%
      rename(journal = `prism:publicationName`,
             title = `dc:title`,
             description = `dc:description`)
    head(res[, c("title", "journal", "description")])
  }
}
#> @article{Muschelli2015Validatedimages,
#> author = {John Muschelli and Natalie L. Ullman and W. Andrew Mould and Paul Vespa and Daniel F. Hanley and Ciprian M. Crainiceanu},
#> address = {David Geffen School of Medicine at UCLA; Johns Hopkins Bloomberg School of Public Health; Johns Hopkins Medical Institutions},
#> title = {Validated automatic brain extraction of head CT images},
#> journal = {NeuroImage},
#> year = {2015},
#> volume = {114},
#> number = {},
#> pages = {379-385},
#> doi = {10.1016/j.neuroimage.2015.03.074},
#> abstract = {Background: X-ray computed tomography (CT) imaging of the brain is commonly used in diagnostic settings. Although CT scans are primarily used in clinical practice, they are increasingly used in research. A fundamental processing step in brain imaging research is brain extraction - the process of separating the brain tissue from all other tissues. Methods for brain extraction have either been 1) validated but not fully automated, or 2) fully automated and informally proposed, but never formally validated. Aim: To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and data smoothing is studied and reported. Methods: All images were thresholded using a 0-100Hounsfield unit (HU) range. In one variant of the pipeline, data were smoothed using a 3-dimensional Gaussian kernel (σ=1mm3) and re-thresholded to 0-100HU; in the other, data were not smoothed. BET was applied using 1 of 3 fractional intensity (FI) thresholds: 0.01, 0.1, or 0.35 and any holes in the brain mask were filled.For validation against a manual segmentation, 36 images from patients with intracranial hemorrhage were selected from 19 different centers from the MISTIE (Minimally Invasive Surgery plus recombinant-tissue plasminogen activator for Intracerebral Evacuation) stroke trial. Intracranial masks of the brain were manually created by one expert CT reader. The resulting brain tissue masks were quantitatively compared to the manual segmentations using sensitivity, specificity, accuracy, and the Dice Similarity Index (DSI). Brain extraction performance across smoothing and FI thresholds was compared using the Wilcoxon signed-rank test. The intracranial volume (ICV) of each scan was estimated by multiplying the number of voxels in the brain mask by the dimensions of each voxel for that scan. From this, we calculated the ICV ratio comparing manual and automated segmentation: ICVautomatedICVmanual.To estimate the performance in a large number of scans, brain masks were generated from the 6 BET pipelines for 1095 longitudinal scans from 129 patients. Failure rates were estimated from visual inspection. ICV of each scan was estimated and an intraclass correlation (ICC) was estimated using a one-way ANOVA. Results: Smoothing images improves brain extraction results using BET for all measures except specificity (all p<. 0.01, uncorrected), irrespective of the FI threshold. Using an FI of 0.01 or 0.1 performed better than 0.35. Thus, all reported results refer only to smoothed data using an FI of 0.01 or 0.1. Using an FI of 0.01 had a higher median sensitivity (0.9901) than an FI of 0.1 (0.9884, median difference: 0.0014, p<. 0.001), accuracy (0.9971 vs. 0.9971; median difference: 0.0001, p<. 0.001), and DSI (0.9895 vs. 0.9894; median difference: 0.0004, p<. 0.001) and lower specificity (0.9981 vs. 0.9982; median difference: -. 0.0001, p<. 0.001). These measures are all very high indicating that a range of FI values may produce visually indistinguishable brain extractions. 
#> Using smoothed data and an FI of 0.01, the mean (SD) ICV ratio was 1.002 (0.008); the mean being close to 1 indicates the ICV estimates are similar for automated and manual segmentation.In the 1095 longitudinal scans, this pipeline had a low failure rate (5.2%) and the ICC estimate was high (0.929, 95% CI: 0.91, 0.945) for successfully extracted brains. Conclusion: BET performs well at brain extraction on thresholded, 1mm3 smoothed CT images with an FI of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Analysis code is provided.}}
#> Warning: 'entries_to_df' is deprecated.
#> Use 'gen_entries_to_df' instead.
#> See help("Deprecated")
#> title
#> 1 Comparing Step-Counting Algorithms for High-Resolution Wrist Accelerometry Data in Older Adults in the ARIC Study
#> 2 Fine-Mapping the Association of Acute Kidney Injury With Mean Arterial and Central Venous Pressures During Coronary Artery Bypass Surgery
#> 3 Comparing Step Counting Algorithms for High-Resolution Wrist Accelerometry Data in NHANES 2011-2014
#> 4 Objectively Measured Physical Activity Using Wrist-Worn Accelerometers as a Predictor of Incident Alzheimer’s Disease in the UK Biobank
#> 5 Assessment of Renal Vein Stasis Index by Transesophageal Echocardiography During Cardiac Surgery: A Feasibility Study
#> 6 Occurrence of Low Cardiac Index During Normotensive Periods in Cardiac Surgery: A Prospective Cohort Study Using Continuous Noninvasive Cardiac Output Monitoring
#> journal
#> 1 Journals of Gerontology Series A Biological Sciences and Medical Sciences
#> 2 Anesthesia and Analgesia
#> 3 Medicine and Science in Sports and Exercise
#> 4 Journals of Gerontology Series A Biological Sciences and Medical Sciences
#> 5 Anesthesia and Analgesia
#> 6 Anesthesia and Analgesia
#> description
#> 1 Article
#> 2 Article
#> 3 Article
#> 4 Article
#> 5 Letter
#> 6 Article