torchml.decomposition
Classes
torchml.decomposition.PCA
Description
Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD.
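The centered-SVD approach described above can be sketched in plain PyTorch (this is an illustrative sketch, not the torchml implementation; the helper name `pca_fit_transform` is hypothetical):

```python
import torch

def pca_fit_transform(X, n_components):
    # Hypothetical helper illustrating SVD-based PCA.
    # Center each feature (column); PCA centers but does not scale.
    X = X - X.mean(dim=0)
    # Thin SVD: X = U @ diag(S) @ Vh; the rows of Vh are the principal axes.
    U, S, Vh = torch.linalg.svd(X, full_matrices=False)
    # Project the centered data onto the top principal axes.
    return X @ Vh[:n_components].T

X = torch.tensor([[-1., -1.], [-2., -1.], [-3., -2.],
                  [1., 1.], [2., 1.], [3., 2.]])
Z = pca_fit_transform(X, 2)  # shape (6, 2)
```

Because the singular values are returned in descending order, the first column of `Z` carries at least as much variance as the second.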
References
- The scikit-learn [documentation page](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)
- Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, arXiv:0909.4061 [math.NA; math.PR], 2009.
Arguments
n_components
: int, default=None
  Number of components to keep. If n_components is not set, all components are kept: n_components == min(n_samples, n_features).

svd_solver
: {'auto', 'full', 'arpack', 'randomized'}, default='auto'
  The algorithm used to compute the SVD. If 'auto', the solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, the more efficient 'randomized' method is used. Otherwise the exact full SVD is computed.
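The 'auto' selection policy described above can be sketched as a small helper (the function name `choose_svd_solver` is hypothetical, written only to illustrate the stated thresholds):

```python
def choose_svd_solver(n_samples, n_features, n_components):
    # Illustrative sketch of the 'auto' policy: pick 'randomized' when the
    # data is larger than 500x500 and the requested number of components is
    # below 80% of the smallest dimension; otherwise fall back to full SVD.
    smallest = min(n_samples, n_features)
    if n_samples > 500 and n_features > 500 and n_components < 0.8 * smallest:
        return "randomized"
    return "full"

print(choose_svd_solver(1000, 1000, 10))   # large data, few components
print(choose_svd_solver(100, 100, 50))     # small data
```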
Example
import torch
from torchml.decomposition import PCA

# Six 2-D points; use a floating-point tensor so centering and the SVD
# behave as expected.
X = torch.tensor([[-1., -1.], [-2., -1.], [-3., -2.],
                  [1., 1.], [2., 1.], [3., 2.]])
pca = PCA(n_components=2)
pca.fit(X)