13.10 : Bootstrapping

The term "bootstrap" originated in the 19th century as a metaphor for self-improvement or achieving something independently, without external assistance. This concept extends to statistical bootstrapping, a self-contained method for estimating population parameters through resampling, even though it can be computationally intensive. Developed by the American statistician Dr. Bradley Efron in 1979, bootstrapping provides a robust way to perform inference when the original sample size is small or the data is complex.

Bootstrapping, also known as bootstrap resampling, simulates the sampling process by drawing multiple random samples, with replacement, from an existing dataset. Here, the original sample acts as a stand-in "population," and each resample is treated as an independent sample drawn from this "population." The underlying assumption is that the original sample is a good representation of the broader population. This approach is especially valuable when sample sizes are limited, as in studies with rare fossils, ancient genomic samples, tissues from rare diseases, endangered species studies, and unique experiments that cannot easily be repeated.

The basic process of bootstrapping includes the following steps:

  1. Collect an initial sample of size n from the population to estimate a parameter of interest.
  2. Treat this sample as a "population."
  3. Draw several new samples of size n, with replacement, from the original sample using random sampling.
  4. Use these "bootstrap resamples" for analysis to estimate the desired parameter (a short code sketch of these steps follows the list).
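
To make these steps concrete, the following is a minimal Python sketch of the procedure, assuming NumPy is available; the sample values and the choice of the sample mean as the parameter of interest are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Step 1: an initial sample of size n (illustrative data).
    original_sample = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2])
    n = len(original_sample)

    # Steps 2-3: treat the sample as a stand-in "population" and draw
    # B new samples of size n, with replacement.
    B = 2000
    resamples = rng.choice(original_sample, size=(B, n), replace=True)

    # Step 4: compute the statistic of interest (here, the sample mean)
    # on each bootstrap resample.
    bootstrap_means = resamples.mean(axis=1)

    print("Original sample mean:", original_sample.mean())
    print("Mean of bootstrap means:", bootstrap_means.mean())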

Because resampling is done with replacement, each new sample may include repeated values from the original data, reflecting the randomness of the resampling process. Bootstrapping typically requires a large number of resamples (often over 1,000) to achieve stable estimates, which can then be used to calculate statistics such as the mean, variance, standard error, or confidence intervals for population parameters.
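
Continuing the illustrative sketch above, the spread of the bootstrap statistics can be used to approximate the standard error and a percentile confidence interval; the 95% level and the percentile method shown here are assumed choices, not the only options.

    # Standard error of the mean, estimated as the standard deviation
    # of the bootstrap statistics.
    bootstrap_se = bootstrap_means.std(ddof=1)

    # 95% percentile confidence interval: the 2.5th and 97.5th
    # percentiles of the bootstrap distribution.
    ci_lower, ci_upper = np.percentile(bootstrap_means, [2.5, 97.5])

    print("Bootstrap standard error:", bootstrap_se)
    print("95% percentile CI:", (ci_lower, ci_upper))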

Bootstrapping is both cost-effective and accessible, offering a straightforward way to make inferences without needing additional data. However, it relies heavily on the original sample, meaning that any biases or errors in the original data will be present in the bootstrapped results as well.
