
Reducing Dimensions in Data with scikit-learn


April 19, 2019, 07:08:40 PM

MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 2.5 Hours | 274 MB
Genre: eLearning | Language: English

This course covers a wide range of the important dimensionality reduction and feature selection techniques available in scikit-learn, allowing model builders to optimize model performance by reducing overfitting and saving on model training time.

Dimensionality reduction is a powerful and versatile machine learning technique that can be used to improve the performance of virtually every ML model. Using dimensionality reduction, you can significantly speed up model training and validation, saving both time and money, as well as greatly reduce the risk of overfitting. In this course, Reducing Dimensions in Data with scikit-learn, you will gain the ability to design and implement an exhaustive array of feature selection and dimensionality reduction techniques in scikit-learn.

First, you will learn the importance of dimensionality reduction and understand the pitfalls of working with data of excessively high dimensionality, often referred to as the curse of dimensionality. Next, you will discover how to implement feature selection techniques to decide which subset of the existing features to use while losing as little information from the original, full dataset as possible.

You will then learn important techniques for reducing dimensionality in linear data. Such techniques, notably Principal Component Analysis and Linear Discriminant Analysis, seek to re-orient the original data along new, optimized axes. The choice of these axes is driven by numeric procedures such as eigenvalue and singular value decomposition.

You will then move on to dealing with manifold data, which is non-linear and often takes the form of swiss rolls and S-curves. Such data presents an illusion of complexity but is actually easily simplified by unrolling the manifold. Finally, you will explore how to implement a wide variety of manifold learning techniques, including multi-dimensional scaling (MDS), Isomap, and t-distributed Stochastic Neighbor Embedding (t-SNE). You will round out the course by comparing the results of these manifold unrolling techniques on different datasets, including images of faces and handwritten digit data.

When you're finished with this course, you will have the skills and knowledge of dimensionality reduction needed to design and implement ways to mitigate the curse of dimensionality in scikit-learn.
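The course itself is video, but as a rough idea of what these scikit-learn APIs look like in practice, here is a minimal sketch (not taken from the course) that applies feature selection, PCA, LDA, and t-SNE to scikit-learn's built-in handwritten digits dataset. The dataset choice and all parameter values (k=20, n_components=2, random_state=0) are illustrative assumptions, not the course's own settings.

Code: [Select]
# Illustrative sketch only (not from the course): a few of the scikit-learn
# techniques named above, applied to the built-in handwritten digits dataset.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)  # 1797 samples, 64 pixel features

# Feature selection: keep the 20 features that score highest against the labels
# (chi2 is suitable here because pixel intensities are non-negative).
X_kbest = SelectKBest(score_func=chi2, k=20).fit_transform(X, y)

# PCA: unsupervised re-orientation of the data along axes of maximum variance.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print("PCA explained variance ratio:", pca.explained_variance_ratio_)

# LDA: supervised projection onto axes that best separate the digit classes
# (scikit-learn may warn that some constant pixel features are collinear;
# the projection is still computed).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

# t-SNE: non-linear manifold embedding into two dimensions (slower to compute).
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

print(X_kbest.shape, X_pca.shape, X_lda.shape, X_tsne.shape)

A typical next step, along the lines the course description suggests, is to compare the 2-D embeddings visually, e.g. as scatter plots colored by class label.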


