-
econdispatchlongcode
Economic dispatch using the Lagrange multiplier method (a minimal sketch of the equal-incremental-cost iteration follows this entry).
- Download: 2012-05-28 13:06:18
- Points: 1
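The Lagrange condition for economic dispatch is that every unit runs at the same incremental cost, dC_i/dP_i = lambda. A minimal sketch of the usual lambda (bisection) iteration is given below; the quadratic cost coefficients, unit limits, and demand are made up for illustration, not taken from the package.

```matlab
% Lambda iteration for equal-incremental-cost dispatch (the Lagrange
% condition dC_i/dP_i = lambda for all units); all numbers are illustrative.
a = [0.008; 0.009; 0.007]; b = [7; 6.3; 6.8];  % C_i(P) = a_i P^2 + b_i P + c_i
Pmin = [100; 100; 100];    Pmax = [500; 400; 600];
Pd = 975;                                      % total demand (MW)

lo = min(b); hi = max(b + 2*a.*Pmax);          % bracket for lambda
for it = 1:100                                 % bisection on lambda
    lambda = (lo + hi) / 2;
    P = (lambda - b) ./ (2*a);                 % stationarity: dC/dP = lambda
    P = min(max(P, Pmin), Pmax);               % respect unit limits
    if sum(P) > Pd, hi = lambda; else, lo = lambda; end
end
fprintf('lambda = %.4f, P = [%s] MW\n', lambda, num2str(P', '%.1f '));
```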
-
mu_autoland_synthesis
Longitudinal three-degree-of-freedom model of a hypersonic vehicle; usable for linearization studies and attitude-control design (a reference form of the equations follows this entry).
- Download: 2013-05-10 07:28:00
- Points: 1
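For reference, a common form of the longitudinal three-degree-of-freedom equations (two translational states plus pitch rotation) that such models are typically built from; the exact formulation, force models, and state choice in this package may differ.

```latex
\begin{aligned}
\dot{V}      &= \frac{T\cos\alpha - D}{m} - g\sin\gamma \\
\dot{\gamma} &= \frac{T\sin\alpha + L}{mV} - \frac{g\cos\gamma}{V} \\
\dot{h}      &= V\sin\gamma, \qquad \dot{\theta} = q, \qquad
\dot{q} = \frac{M}{I_{yy}}, \qquad \alpha = \theta - \gamma
\end{aligned}
```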
-
optics
OPTICS density-based clustering source code, simple and easy to understand; just change the file passed to load in ReadTxt.m and it runs directly (a compact sketch of the ordering step follows this entry).
- Download: 2020-11-08 10:19:47
- Points: 1
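A compact sketch of the OPTICS ordering idea (core distances, then repeatedly expanding from the point with the smallest reachability); this is an illustrative stand-in, not the packaged code, which loads its data through ReadTxt.m. pdist/squareform need the Statistics and Machine Learning Toolbox.

```matlab
X = rand(200, 2);                          % toy data
n = size(X, 1); epsilon = 0.2; minPts = 5; % illustrative parameters
D = squareform(pdist(X));                  % pairwise distance matrix

coreD = inf(n, 1);
for i = 1:n
    d = sort(D(i, :));
    if d(minPts + 1) <= epsilon            % distance to minPts-th neighbor
        coreD(i) = d(minPts + 1);          % (the +1 excludes the point itself)
    end
end

reach = inf(n, 1); order = zeros(n, 1); done = false(n, 1);
for k = 1:n
    cand = find(~done);                    % next: smallest reachability so far
    [~, j] = min(reach(cand)); p = cand(j);
    done(p) = true; order(k) = p;
    if isfinite(coreD(p))                  % expand only from core points
        nb = find(D(p, :) <= epsilon & ~done'); nb = nb(:);
        newReach = max(coreD(p), D(p, nb)');
        reach(nb) = min(reach(nb), newReach);
    end
end

r = reach(order); r(~isfinite(r)) = epsilon;  % cap undefined reach for plotting
bar(r);                                    % reachability plot: valleys = clusters
```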
-
adaptivePboostingPATR
Description: A paper describing AdaBoost-based target recognition in detail, using the MSTAR database as its data source; a very classic AdaBoost implementation (a minimal stump-based sketch follows this entry).
- Download: 2011-03-28 09:01:41
- Points: 1
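To show the boosting mechanics the paper relies on, here is a minimal AdaBoost with decision stumps on toy Gaussian data; the paper's MSTAR feature extraction and its actual weak learners are not reproduced.

```matlab
% Minimal AdaBoost with decision stumps (illustrative toy data).
X = [randn(50, 2) + 1; randn(50, 2) - 1];  % two Gaussian classes
y = [ones(50, 1); -ones(50, 1)];
n = numel(y); w = ones(n, 1) / n;          % uniform initial sample weights
T = 20; alpha = zeros(T, 1); stump = zeros(T, 3);  % [dim, threshold, polarity]

for t = 1:T
    best = inf;
    for d = 1:2                            % exhaustive stump search
        for s = [-1 1]
            for th = unique(X(:, d))'
                pred = s * sign(X(:, d) - th); pred(pred == 0) = s;
                err = sum(w .* (pred ~= y));
                if err < best, best = err; stump(t, :) = [d th s]; end
            end
        end
    end
    alpha(t) = 0.5 * log((1 - best) / max(best, eps));  % learner weight
    pred = stump(t, 3) * sign(X(:, stump(t, 1)) - stump(t, 2));
    w = w .* exp(-alpha(t) * y .* pred);   % upweight misclassified samples
    w = w / sum(w);
end

H = zeros(n, 1);                           % strong classifier: weighted vote
for t = 1:T
    H = H + alpha(t) * stump(t, 3) * sign(X(:, stump(t, 1)) - stump(t, 2));
end
fprintf('training accuracy: %.2f\n', mean(sign(H) == y));
```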
-
Comm_system
Communication channel + TX + RX design for picture transfer (a generic sketch of such a link follows this entry).
- Download: 2013-08-10 01:01:16
- Points: 1
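The package's actual TX/RX chain is undocumented, so the sketch below only shows the generic idea of a picture-transfer link: image bytes to a bit stream, BPSK over an AWGN channel, hard decisions back to an image. The modulation, SNR, and test image are assumptions; de2bi/bi2de need the Communications Toolbox and cameraman.tif ships with the Image Processing Toolbox.

```matlab
img  = imread('cameraman.tif');
bits = reshape(de2bi(double(img(:)), 8)', [], 1);  % bytes -> bit stream
tx   = 1 - 2 * bits;                               % BPSK: 0 -> +1, 1 -> -1
snr  = 8;                                          % illustrative SNR in dB
rx   = tx + 10^(-snr/20) * randn(size(tx));        % AWGN channel
hat  = double(rx < 0);                             % hard decisions
rec  = uint8(bi2de(reshape(hat, 8, [])'));         % bits -> bytes
imshow(reshape(rec, size(img)));                   % recovered picture
fprintf('BER = %.2e\n', mean(hat ~= bits));
```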
-
prova
MATLAB script for speech recognition; still a work in progress.
- Download: 2011-10-22 19:22:42
- Points: 1
-
CMA
Constant modulus algorithm (CMA) blind equalization of 4QAM modulation and demodulation over a noisy channel, with a plot of the algorithm's convergence (a minimal sketch follows this entry).
- Download: 2021-04-24 14:48:47
- Points: 1
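A minimal CMA-2 equalizer with its convergence curve, as the entry describes; the channel taps, step size, and filter length below are illustrative choices, not values from the package.

```matlab
% CMA blind equalizer for 4QAM with a smoothed convergence plot.
N = 5000;
s = (2*randi([0 1], N, 1) - 1 + 1j*(2*randi([0 1], N, 1) - 1)) / sqrt(2);
h = [1, 0.4 + 0.3j, 0.2];                     % toy multipath channel
x = filter(h, 1, s) + 0.01*(randn(N, 1) + 1j*randn(N, 1));

L = 11; w = zeros(L, 1); w(ceil(L/2)) = 1;    % center-spike initialization
mu = 1e-3; R = 1;                             % CM radius: unit-modulus 4QAM
J = zeros(N, 1);
for n = L:N
    u = x(n:-1:n-L+1);                        % regressor, newest sample first
    y = w' * u;                               % equalizer output
    w = w - mu * (abs(y)^2 - R) * conj(y) * u;  % stochastic CMA-2 update
    J(n) = (abs(y)^2 - R)^2;                  % instantaneous CM cost
end
semilogy(filter(ones(200,1)/200, 1, J(L:end)));  % moving-average convergence
xlabel('iteration'); ylabel('CM cost');
```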
-
fast Fractional Fourier Transform
Description: Implements the fractional Fourier transform (FRFT) of a signal. The ordinary Fourier transform changes the viewpoint from the time domain to the frequency domain; the FRFT instead rotates the coordinate axes of the time-frequency plane by an arbitrary angle and analyzes the signal from that rotated viewpoint (the standard definition follows this entry).
- Download: 2020-08-07 09:06:33
- Points: 1
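For reference, the standard definition of the order-a FRFT, which performs the time-frequency rotation by angle alpha = a*pi/2 described above (valid for alpha not a multiple of pi; for a = 1 it reduces to the ordinary Fourier transform):

```latex
X_a(u) = \int_{-\infty}^{\infty} K_a(u,t)\, x(t)\, dt, \qquad
K_a(u,t) = \sqrt{1 - j\cot\alpha}\;
  e^{\, j\pi \left( u^2 \cot\alpha \,-\, 2ut\csc\alpha \,+\, t^2 \cot\alpha \right)},
\qquad \alpha = \frac{a\pi}{2}
```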
-
kechengsheji
MATLAB source code for a power-system course project: power flow calculation for a distribution network (a sketch of one common solution method follows this entry).
- Download: 2009-10-29 17:09:21
- Points: 1
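The entry does not say which algorithm the package uses; the backward/forward sweep is a common choice for radial distribution feeders, so here is a minimal sketch of that method on a made-up 4-bus chain.

```matlab
% Backward/forward sweep power flow on a toy radial feeder.
% 4 buses in a chain: bus 1 = slack, branch k connects bus k to bus k+1.
Z = [0.02+0.04j; 0.03+0.05j; 0.02+0.03j];     % branch impedances (p.u.)
S = [0; 0.10+0.05j; 0.08+0.04j; 0.06+0.03j];  % complex bus loads (p.u.)
V = ones(4, 1);                               % flat start, slack = 1 p.u.

for it = 1:30
    Ibus = conj(S ./ V);                      % bus load currents
    J = zeros(3, 1);                          % backward sweep: branch currents
    J(3) = Ibus(4);
    for k = 2:-1:1, J(k) = Ibus(k+1) + J(k+1); end
    Vold = V;
    for k = 1:3                               % forward sweep: voltage drops
        V(k+1) = V(k) - Z(k) * J(k);
    end
    if max(abs(V - Vold)) < 1e-8, break, end  % converged
end
disp(abs(V'));                                % bus voltage magnitudes
```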
-
WindyGridWorldQLearning
Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one. (A minimal tabular Q-learning sketch on the windy gridworld follows this entry.)
- Download: 2013-04-19 14:23:35
- Points: 1
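The package name points at the windy gridworld testbed (Sutton & Barto, Example 6.5). Below is a minimal tabular Q-learning sketch on that problem; the grid layout, wind profile, and hyperparameters follow the textbook example, not the packaged code.

```matlab
% Tabular Q-learning on the windy gridworld.
rows = 7; cols = 10; wind = [0 0 0 1 1 1 2 2 1 0];
start = [4 1]; goal = [4 8];
nA = 4; dr = [-1 1 0 0]; dc = [0 0 -1 1];     % up, down, left, right
Q = zeros(rows, cols, nA);
alpha = 0.5; gamma = 1.0; epsil = 0.1;        % step size, discount, exploration

for ep = 1:500
    s = start;
    while ~isequal(s, goal)
        if rand < epsil, a = randi(nA);       % epsilon-greedy action choice
        else, [~, a] = max(Q(s(1), s(2), :)); end
        r2 = min(max(s(1) + dr(a) - wind(s(2)), 1), rows);  % wind pushes up
        c2 = min(max(s(2) + dc(a), 1), cols);
        rwd = -1;                             % -1 per step until the goal
        % Q-learning update: bootstrap off the greedy value of the next state
        Q(s(1), s(2), a) = Q(s(1), s(2), a) + ...
            alpha * (rwd + gamma * max(Q(r2, c2, :)) - Q(s(1), s(2), a));
        s = [r2 c2];
    end
end
```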