-
cgls
Conjugate gradient method for solving inverse problems: for Ax = b, given the matrix A, the column vector b, and an iteration count k, it computes the column vector x.
- Downloaded 2009-09-04 10:52:50
- Points: 1
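The CGLS iteration this entry describes can be sketched in a few lines. The version below is a generic textbook CGLS (conjugate gradient applied to the normal equations A^T A x = A^T b without forming A^T A), written in Python/NumPy rather than MATLAB; it is an illustration of the method, not the uploaded file itself.

```python
import numpy as np

def cgls(A, b, k):
    """CGLS: conjugate gradient on the normal equations A^T A x = A^T b,
    run for at most k iterations, without forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b - A @ x            # residual in data space
    s = A.T @ r              # residual of the normal equations
    p = s.copy()
    norm_s_old = s @ s
    for _ in range(k):
        q = A @ p
        alpha = norm_s_old / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        if norm_s_new == 0.0:    # exact convergence; stop early
            break
        p = s + (norm_s_new / norm_s_old) * p
        norm_s_old = norm_s_new
    return x
```

For well-posed systems this reduces to ordinary least squares; for ill-posed inverse problems the iteration count k itself acts as the regularization parameter (stopping early suppresses noise amplification).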
-
IP
Note: MATLAB generally cannot solve integer programs directly; this upload provides a MATLAB function for solving integer programming problems.
- Downloaded 2008-06-02 13:40:53
- Points: 1
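The entry does not say which method the uploaded function uses (real solvers typically use branch and bound). As a minimal illustration of what "solving an integer program" means, here is a brute-force enumeration sketch in Python; the function name `toy_ilp` and its interface are invented for this example.

```python
import itertools
import numpy as np

def toy_ilp(c, A, b, bounds):
    """Brute-force a small integer linear program:
    maximize c @ x subject to A @ x <= b, with x[i] in 0..bounds[i].
    Only feasible for tiny problems; production solvers use
    branch and bound or cutting planes instead."""
    best_x, best_val = None, -np.inf
    for x in itertools.product(*(range(u + 1) for u in bounds)):
        x = np.array(x)
        if np.all(A @ x <= b) and c @ x > best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val
```

Example: maximizing 3x + 2y subject to x + y <= 4 and x <= 3 over non-negative integers gives x = 3, y = 1 with objective 11.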
-
sim_2_16
Given a signal's time-domain expression, computes its autocorrelation function in MATLAB and plots it.
- Downloaded 2014-10-18 08:46:50
- Points: 1
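The autocorrelation computation described here has a direct NumPy equivalent (MATLAB users would typically call `xcorr`). The sketch below uses the biased estimator; whether the uploaded script normalizes this way is an assumption.

```python
import numpy as np

def autocorr(x):
    """Biased autocorrelation estimate r[m] = (1/N) * sum_n x[n] x[n+m],
    returned for non-negative lags 0..N-1."""
    N = len(x)
    full = np.correlate(x, x, mode="full")  # lags -(N-1)..(N-1)
    return full[N - 1:] / N                 # keep non-negative lags
```

For the constant signal [1, 1, 1] this returns [1, 2/3, 1/3]: the overlap shrinks by one sample per lag, so the biased estimate decays linearly.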
-
gaborfilter
Gabor filter that takes an image and a set of Gabor parameters as input.
- Downloaded 2011-11-17 14:25:19
- Points: 1
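A Gabor filter is a Gaussian envelope multiplied by a sinusoidal carrier; the parameters the entry alludes to are typically orientation, wavelength, spread, and phase. Below is a standard kernel construction in Python/NumPy (an illustration of the formula, not the uploaded code; the parameter names are conventional, not taken from the file).

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, psi=0.0):
    """2-D Gabor kernel: Gaussian envelope times a cosine carrier.
    ksize: side length, sigma: Gaussian spread, theta: orientation,
    lambd: carrier wavelength, psi: phase offset."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates by theta so the carrier runs along x_t
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier
```

To filter an image, convolve it with the kernel (e.g. `scipy.signal.convolve2d`); a bank of kernels at several orientations is the usual setup for texture analysis.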
-
Programas
Helix plot created in MATLAB: a plot of a function that traces out a helix shape; very practical and fun.
- Downloaded 2010-10-08 07:19:22
- Points: 1
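A helix is just a circle whose height grows linearly with the parameter. A minimal sketch of the coordinates in Python/NumPy (in MATLAB one would pass the three arrays to `plot3`); the parameterization below is generic, not the uploaded script's.

```python
import numpy as np

def helix(turns=3, points_per_turn=100, radius=1.0, pitch=0.5):
    """Parametric helix: x = r*cos(t), y = r*sin(t),
    z rises by `pitch` per full turn."""
    t = np.linspace(0, 2 * np.pi * turns, turns * points_per_turn)
    return radius * np.cos(t), radius * np.sin(t), pitch * t / (2 * np.pi)
```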
-
Untitled
This MATLAB code draws a heart shape, constructed point by point; it looks very interesting.
- Downloaded 2013-08-29 09:21:33
- Points: 1
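One common way to draw a heart "through points" is the classic parametric heart curve; whether the upload uses this particular parameterization is an assumption. Sketch in Python/NumPy:

```python
import numpy as np

def heart_curve(n=500):
    """Classic parametric heart curve, traced point by point:
    x = 16 sin^3(t), y = 13 cos t - 5 cos 2t - 2 cos 3t - cos 4t."""
    t = np.linspace(0, 2 * np.pi, n)
    x = 16 * np.sin(t) ** 3
    y = 13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)
    return x, y
```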
-
untitled
Note: A PID-based speed control system for a permanent magnet synchronous linear motor; the output speed tracks the input signal well, and the control performance is good.
- Downloaded 2019-04-24 21:43:31
- Points: 1
-
cetin_DSS04
A region-enhanced imaging algorithm for sparse-aperture synthetic aperture radar (Region-Enhanced Imaging for Sparse-Aperture Passive Radar)
- Downloaded 2012-06-01 15:52:28
- Points: 1
-
06310047
Quantitative Image Recovery From Measured Blind Backscattered Data Using a Globally Convergent Inverse Method
- Downloaded 2013-07-09 10:01:04
- Points: 1
-
WindyGridWorldQLearning
Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian
domains. It amounts to an incremental method for dynamic programming which imposes limited computational
demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins
(1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions
are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions
to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed
each iteration, rather than just one.
- Downloaded 2013-04-19 14:23:35
- Points: 1
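The Q-learning update the abstract describes — successively improving action-value estimates toward r + γ max_a' Q(s', a') — fits in a few lines. The sketch below runs tabular Q-learning on a toy chain MDP in Python rather than on the windy gridworld of the upload; the environment and hyperparameters are invented for illustration.

```python
import random

def q_learning(n_states=5, n_episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain MDP: states 0..n_states-1,
    actions 0 (left) and 1 (right); reaching the last state yields
    reward 1 and ends the episode. Each step applies
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(n_episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == goal else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q
```

As the theorem in the abstract requires, every action keeps being sampled in every state (here via the epsilon-greedy exploration), and the learned values approach the discounted optimum: Q(s, right) tends to γ^(distance to goal minus one).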