FREE
via Coursera

Resampling, Selection and Splines

  • Free Courses (Audit)
  • English
  • Always Open
  • Coursera Registration Guide
About this course

  • Welcome and Review
    • Welcome to our Resampling, Selection, and Splines class! In this course, we will dive deep into these key topics in statistical learning and explore how they can be applied to data science. This module provides an introductory overview of the course and introduces the instructor.
  • Generalized Least Squares
    • In this module, we will turn our attention to generalized least squares (GLS), a statistical method that extends ordinary least squares (OLS) to account for heteroscedasticity and serial correlation in the error terms. Heteroscedasticity means the variance of the errors is not constant across all levels of the predictor variables, while serial correlation means the errors are correlated across time or space. GLS has many practical applications, such as modeling asset returns in finance, time series data in econometrics, and spatially correlated data in spatial analysis. By the end of this module, you will have a good understanding of how GLS works and when it is appropriate to use it. You will also be able to implement GLS in R using the gls() function in the nlme package (a minimal sketch appears after this module list).
  • Shrinkage Methods
    • In this module, we will explore ridge regression, the LASSO, and principal component analysis (PCA). These techniques are widely used for regression and dimensionality-reduction tasks in machine learning and statistics (see the ridge/LASSO/PCA sketch after this list).
  • Cross-Validation
    • This week, we will explore cross-validation, a crucial technique for evaluating and comparing the performance of different statistical learning models. We will cover several variants, including k-fold, leave-one-out, and stratified cross-validation, and discuss their strengths, weaknesses, and best practices for implementation. Additionally, we will examine how cross-validation can be used for model selection and hyperparameter tuning (a hand-rolled k-fold example appears after this list).
  • Bootstrapping
    • For our final module, we will explore bootstrapping, a resampling technique that lets us gauge the variability of statistical estimators and quantify uncertainty in our models. By creating many simulated datasets through resampling, we can explore the distribution of sample statistics, construct confidence intervals, and perform hypothesis tests. Bootstrapping is particularly useful when parametric assumptions are hard to meet or when data are limited. By the end of this week, you will have a solid understanding of bootstrapping and its practical applications in statistical learning (a short bootstrap sketch closes out the examples after this list).
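
Example code in R

The following is a minimal sketch of the GLS workflow from the Generalized Least Squares module, using the gls() function from the nlme package named in the description. The simulated data, the variable names (df, x, y, time), and the AR(1) correlation structure are illustrative assumptions, not course material.

```r
# Minimal GLS sketch; data and variable names are hypothetical.
library(nlme)

set.seed(1)
df <- data.frame(time = 1:100, x = rnorm(100))
# Response with serially correlated (AR(1)) errors
df$y <- 2 + 0.5 * df$x + as.numeric(arima.sim(list(ar = 0.6), n = 100))

# OLS for comparison: ignores the serial correlation in the errors
fit_ols <- lm(y ~ x, data = df)

# GLS models the correlation explicitly with an AR(1) structure
fit_gls <- gls(y ~ x, data = df, correlation = corAR1(form = ~ time))
summary(fit_gls)  # coefficient table with GLS standard errors
```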
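
For the Shrinkage Methods module, here is a minimal sketch of ridge regression and the LASSO via the glmnet package, plus PCA with base R's prcomp(). The choice of glmnet is an assumption (the course may use different tooling), and the simulated data are purely illustrative.

```r
# Ridge, LASSO, and PCA sketch; glmnet and the data are assumptions.
library(glmnet)

set.seed(1)
X <- matrix(rnorm(100 * 20), nrow = 100)  # 100 observations, 20 predictors
y <- X[, 1] - 2 * X[, 2] + rnorm(100)     # only two predictors matter

ridge <- cv.glmnet(X, y, alpha = 0)  # alpha = 0 gives the ridge penalty
lasso <- cv.glmnet(X, y, alpha = 1)  # alpha = 1 gives the LASSO penalty
coef(lasso, s = "lambda.min")        # LASSO zeroes out most coefficients

pca <- prcomp(X, scale. = TRUE)  # PCA on standardized predictors
summary(pca)                     # proportion of variance per component
```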
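
For the Cross-Validation module, here is a hand-rolled k-fold cross-validation loop in base R that estimates the test MSE of a linear model. The choice of k = 5 and the simulated data are arbitrary illustrative assumptions.

```r
# 5-fold cross-validation by hand; data and k are illustrative.
set.seed(1)
df <- data.frame(x = rnorm(100))
df$y <- 1 + 3 * df$x + rnorm(100)

k <- 5
folds <- sample(rep(1:k, length.out = nrow(df)))  # random fold assignment

cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, data = df[folds != i, ])        # train on k - 1 folds
  pred <- predict(fit, newdata = df[folds == i, ])  # predict held-out fold
  mean((df$y[folds == i] - pred)^2)                 # MSE on that fold
})
mean(cv_mse)  # cross-validated estimate of test error
```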
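
Finally, for the Bootstrapping module, here is a minimal nonparametric bootstrap of the sample median with a percentile confidence interval, using the boot package. The package choice and the data are assumptions for illustration only.

```r
# Bootstrap sketch; the boot package and simulated data are assumptions.
library(boot)

set.seed(1)
x <- rexp(50)  # a small, skewed sample where normal theory is awkward

med_fun <- function(data, idx) median(data[idx])  # statistic on one resample

b <- boot(x, statistic = med_fun, R = 2000)  # 2000 bootstrap resamples
boot.ci(b, type = "perc")                    # percentile CI for the median
```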