-
feisher
PCA steps:
1 Center the data;
2 Compute the covariance matrix;
3 Compute the eigenvalues and eigenvectors of the covariance matrix;
4 Sort the eigenvalues (and their eigenvectors) in descending order;
5 Given the target dimensionality d', take the top d' eigenvectors as the projection directions;
6 Project the data to obtain the reduced-dimension representation.
- 2010-10-28 11:25:05 Download
- Points: 1
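The six steps listed above translate almost line-for-line into numerical code. A minimal sketch in Python/NumPy (used here for illustration only — the listed package itself is MATLAB and its source is not shown):

```python
import numpy as np

def pca(X, d):
    """Project the rows of X (n samples x p features) onto the top-d principal components."""
    # 1. Center the data
    Xc = X - X.mean(axis=0)
    # 2. Covariance matrix
    C = np.cov(Xc, rowvar=False)
    # 3. Eigenvalues / eigenvectors (C is symmetric, so eigh is appropriate)
    vals, vecs = np.linalg.eigh(C)
    # 4. Sort in descending order of eigenvalue
    order = np.argsort(vals)[::-1]
    vecs = vecs[:, order]
    # 5. The top-d eigenvectors are the projection directions
    W = vecs[:, :d]
    # 6. Project the centered data
    return Xc @ W

X = np.random.default_rng(0).normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

A quick sanity check on the result: the projected columns are uncorrelated, with variance decreasing from the first component to the last.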
-
MATLAByidongfangzhen
Simulation of a wireless path-loss model; a decent program, worth a look.
- 2010-05-27 18:01:18 Download
- Points: 1
-
XIAOBO
A simple MATLAB program for db4 wavelet data processing, provided for reference.
- 2014-06-17 11:44:00 Download
- Points: 1
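For reference, a single level of the db4 decomposition can be sketched as follows in Python/NumPy (illustration only — the listed package is MATLAB and not shown). The db4 filter coefficients below are the standard published values; boundary handling is kept to plain full convolution, so the output is not bit-exact with any particular toolbox:

```python
import numpy as np

# db4 (Daubechies, 4 vanishing moments) decomposition low-pass filter
DB4_LO = np.array([-0.010597401784997278,  0.032883011666982945,
                    0.030841381835986965, -0.18703481171888114,
                   -0.02798376941698385,   0.6308807679295904,
                    0.7148465705525415,    0.23037781330885523])
# Quadrature-mirror high-pass filter: g[n] = (-1)^n * h[L-1-n]
DB4_HI = DB4_LO[::-1] * np.array([1, -1] * 4)

def dwt_db4(x):
    """One level of the db4 DWT: approximation and detail coefficients."""
    a = np.convolve(x, DB4_LO)[len(DB4_LO) - 1::2]  # low-pass, downsample by 2
    d = np.convolve(x, DB4_HI)[len(DB4_HI) - 1::2]  # high-pass, downsample by 2
    return a, d

x = np.sin(np.linspace(0, 4 * np.pi, 64))
a, d = dwt_db4(x)
print(len(a), len(d))  # 32 32
```

The low-pass taps sum to sqrt(2) and have unit energy, and the high-pass taps sum to zero — the usual orthonormal-wavelet filter properties.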
-
suspension
Analysis of a suspension system; the controller results are obtained by MATLAB simulation.
- 2009-11-21 10:39:53 Download
- Points: 1
-
Package
MATLAB source code for simulating the radiation pattern of a smart antenna.
- 2011-01-27 16:53:31 Download
- Points: 1
-
hilbert
Description: An implementation of the Hilbert transform, used to process time-domain signals and extract the phase variation.
- 2011-04-13 18:38:49 Download
- Points: 1
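The same idea can be sketched in Python with SciPy (illustration only — the listed function itself is MATLAB and not shown): the Hilbert transform yields the analytic signal, whose angle is the instantaneous phase, and whose derivative gives the instantaneous frequency.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                              # sampling rate in Hz (illustrative values)
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 10 * t)           # 10 Hz test tone, exactly periodic here

analytic = hilbert(x)                    # analytic signal: x + j * H{x}
phase = np.unwrap(np.angle(analytic))    # instantaneous phase (rad)
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # phase derivative -> frequency (Hz)
print(round(float(np.median(inst_freq)), 3))   # ~10 Hz
```

Because the test tone fits an integer number of periods in the window, the FFT-based analytic signal is essentially exact; on real data expect edge effects at the ends of the record.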
-
MATLAB
A document on how to program in MATLAB. Programming in MATLAB is easier than in C, and its matrix-computation capabilities are beyond what C can match. When programming in MATLAB, you can consult this file if you forget some small detail.
- 2011-05-14 10:14:50 Download
- Points: 1
-
fx-demo
Mainly used in seismic data processing for wavenumber-frequency domain deconvolution and image display.
- 2012-06-09 16:10:59 Download
- Points: 1
-
vectors-as-arrays
MATLAB: vectors as arrays.
- 2013-05-22 03:42:40 Download
- Points: 1
-
1807.01622
Description: Deep neural networks excel at function approximation, but they must be trained from scratch. Bayesian methods, on the other hand, such as Gaussian processes (GPs), can exploit prior knowledge to perform fast inference at test time. However, GPs are computationally expensive, and it is hard to design a suitable prior. This paper proposes a neural model, Conditional Neural Processes (CNPs), that combines the advantages of both. CNPs are inspired by flexible stochastic processes such as GPs, but are structured as neural networks and trained by gradient descent. After training on only a small amount of data, CNPs make accurate predictions, and they scale to complex functions and large datasets. The paper demonstrates the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification, and image completion.
- 2020-06-23 22:20:02 Download
- Points: 1