Search
Search Results
Search finished, found 840 pages matching the search query.
sklearn.metrics.pairwise.paired_distances
`sklearn.metrics.pairwise.paired_distances(X, Y, *, metric='euclidean', **kwds)` [source]
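The signature above is enough to show the typical call; the following is a minimal sketch (not the library's own example), assuming only that `X` and `Y` have the same shape and that distances are computed row by row.

```python
import numpy as np
from sklearn.metrics.pairwise import paired_distances

# Distances are computed between corresponding rows: (X[0], Y[0]), (X[1], Y[1]), ...
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[3.0, 4.0], [1.0, 2.0]])

print(paired_distances(X, Y))                      # default 'euclidean' -> [5. 1.]
print(paired_distances(X, Y, metric="manhattan"))  # [7. 1.]
```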
sklearn.datasets.make_low_rank_matrix
`sklearn.datasets.make_low_rank_matrix(n_samples=100, n_features=100, *, effective_rank=10…`
sklearn.decomposition.NMF
`class sklearn.decomposition.NMF(n_components=None, *, init=None, solver='cd', beta_loss='f…`
sklearn.ensemble.GradientBoostingRegressor
`class sklearn.ensemble.GradientBoostingRegressor(*, loss='ls', learning_rate=0.1, n_estim…`
sklearn.model_selection.train_test_split
`sklearn.model_selection.train_test_split(*arrays, **options)`
Split arrays or matrices into random train and test subsets. This quick utility…
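A small usage sketch of the call described above; `test_size` and `random_state` are standard keyword options of this function but are not visible in the truncated preview.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y = np.arange(10)                 # matching targets

# Hold out 30% of the samples as a test set; fix the seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
print(X_train.shape, X_test.shape)  # (7, 2) (3, 2)
```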
sklearn.neighbors.KNeighborsRegressor
`class sklearn.neighbors.KNeighborsRegressor(n_neighbors=5, *, weights='uniform', algorithm='auto…`
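A minimal sketch of how this regressor is typically used, assuming the defaults shown in the truncated signature (uniform weights, automatic algorithm choice).

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy 1-D regression problem: y = 2 * x.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

# With uniform weights, the prediction is the mean target of the k nearest neighbours.
reg = KNeighborsRegressor(n_neighbors=2).fit(X, y)
print(reg.predict([[1.6]]))  # neighbours at x=1 and x=2 -> mean of 2 and 4 -> [3.]
```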
sklearn.feature_extraction.DictVectorizer
`class sklearn.feature_extraction.DictVectorizer(*, dtype=<class 'numpy.float64'>, separato…`
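A hedged sketch of the vectorizer's intended use: each sample is a dict mapping feature names to values, string values are one-hot encoded, and the default separator joins the name and the category (e.g. `city=Dubai`). The `sparse=False` argument here is only to make the printed output readable.

```python
from sklearn.feature_extraction import DictVectorizer

measurements = [
    {"city": "Dubai", "temperature": 33.0},
    {"city": "London", "temperature": 12.0},
]

vec = DictVectorizer(sparse=False)   # dense output just for easy printing
X = vec.fit_transform(measurements)

print(vec.feature_names_)  # ['city=Dubai', 'city=London', 'temperature']
print(X)                   # [[ 1.  0. 33.], [ 0.  1. 12.]]
```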
sklearn.feature_selection.SelectFdr
`class sklearn.feature_selection.SelectFdr(score_func=<function f_classif>, *, alpha=0.05)`
sklearn.preprocessing.StandardScaler
`class sklearn.preprocessing.StandardScaler(*, copy=True, with_mean=True, with_std=True)`
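The full signature is shown above, so only a short usage sketch is added here: `fit` learns per-feature mean and standard deviation, and `transform` centers and scales each column.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]])

scaler = StandardScaler()            # with_mean=True, with_std=True by default
X_scaled = scaler.fit_transform(X)

print(scaler.mean_)  # [0.5 0.5]
print(X_scaled)      # each column now has mean 0 and unit variance
```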
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1
`sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1()`
Row normalization using the l1 norm.
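This is a low-level Cython helper that rewrites a CSR matrix in place; the sketch below assumes float data and a scipy CSR input. For most application code the public route is `sklearn.preprocessing.normalize(X, norm='l1')`.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.sparsefuncs_fast import inplace_csr_row_normalize_l1

# Float data is required; the routine modifies X itself and returns nothing.
X = csr_matrix(np.array([[1.0, 3.0, 0.0],
                         [0.0, 2.0, 2.0]]))

inplace_csr_row_normalize_l1(X)
print(X.toarray())  # each row now sums to 1: [[0.25 0.75 0.  ], [0.   0.5  0.5 ]]
```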
sklearn.gaussian_process.ExpSineSquared
`class sklearn.gaussian_process.kernels.ExpSineSquared(length_scale=1.0, periodicity=1.0,…`
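ExpSineSquared is the periodic (exp-sine-squared) kernel for Gaussian processes. The sketch below is an illustrative fit on noisy sinusoidal data, not the library's own example; the choice of `periodicity=2*np.pi` and `alpha=0.01` is an assumption made for this toy signal.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared

# Noisy samples of a periodic signal.
rng = np.random.RandomState(0)
X = np.linspace(0.0, 10.0, 30).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.randn(30)

kernel = ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=0.01).fit(X, y)

print(gpr.predict([[12.0]]))  # extrapolation relies on the learned periodic structure
```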
Demo of the OPTICS clustering algorithm
Finds core samples of high density and expands clusters from them. This example uses generated data so that the clusters have different densities. [`sklearn.cluster.OPTICS`](https://scikit-learn.org.cn/view…
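In the spirit of the example summarized above, here is a hedged sketch that generates two clusters of different densities and runs OPTICS on them; `min_samples=10` is an arbitrary choice for this toy data, not the value used in the original demo.

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.RandomState(0)
dense = rng.randn(100, 2) * 0.3             # tight cluster around the origin
sparse = rng.randn(100, 2) * 1.5 + [8, 8]   # looser cluster far away
X = np.vstack([dense, sparse])

# OPTICS finds core samples of high density and expands clusters from them.
labels = OPTICS(min_samples=10).fit(X).labels_
print(np.unique(labels))  # cluster ids, with -1 marking noise points
```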
1.8 Cross decomposition
The cross decomposition module contains two main families of algorithms: partial least squares (PLS) and canonical correlation analysis (CCA). These algorithms are useful for finding linear relations between two multivariate datasets: the `X` and `Y` arguments of the `fit` method are 2-D arrays.
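Since the snippet notes that `fit` takes 2-D `X` and `Y`, a short PLS sketch is added below as an illustration; `PLSRegression` is one member of the module, and the synthetic data with a shared linear structure is an assumption made for the demo.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Two multivariate datasets with a shared linear structure.
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
Y = X @ rng.randn(5, 2) + 0.1 * rng.randn(100, 2)

pls = PLSRegression(n_components=2).fit(X, Y)   # fit takes 2-D X and 2-D Y
X_scores, Y_scores = pls.transform(X, Y)

print(pls.score(X, Y))                 # R^2 of the fit, close to 1 here
print(X_scores.shape, Y_scores.shape)  # (100, 2) (100, 2)
```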
Incremental PCA
Incremental principal component analysis (IPCA) is typically used instead of principal component analysis (PCA) when the dataset to be decomposed is too large to fit in memory. It still depends on the features of the input data, but changing the batch size makes it possible to control memory usage. This example serves as a visual check to…
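A hedged sketch of the batch-wise workflow described above, using `IncrementalPCA.partial_fit` on chunks of an in-memory array; in a genuine out-of-core setting the chunks would instead be streamed from disk.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
X = rng.randn(1000, 20)   # stand-in for a dataset too large to fit in memory

ipca = IncrementalPCA(n_components=5, batch_size=100)

# Feed the data chunk by chunk; memory usage is governed by the chunk/batch size.
for chunk in np.array_split(X, 10):
    ipca.partial_fit(chunk)

print(ipca.explained_variance_ratio_.shape)  # (5,)
print(ipca.transform(X).shape)               # (1000, 5)
```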
Two-class AdaBoost
This example fits an AdaBoosted decision stump on a dataset composed of two "Gaussian quantiles" clusters (see [`sklearn.datasets.make_gaussian_quantiles`](https://…
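A simplified sketch of the setup described above: an AdaBoosted decision stump trained on Gaussian-quantiles data. The original example concatenates two such clusters; here a single two-class `make_gaussian_quantiles` call is used for brevity, and `base_estimator` assumes an older scikit-learn (newer releases rename the parameter to `estimator`).

```python
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Two-class dataset built from Gaussian quantiles.
X, y = make_gaussian_quantiles(n_samples=500, n_features=2,
                               n_classes=2, random_state=1)

# Each weak learner is a depth-1 tree (a decision stump).
stump = DecisionTreeClassifier(max_depth=1)
clf = AdaBoostClassifier(base_estimator=stump, n_estimators=200, random_state=1)
clf.fit(X, y)

print(clf.score(X, y))  # training accuracy of the boosted ensemble
```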