-
distance
An LTE cell model that also computes the distance between each user and each base station; a minimal distance-matrix sketch follows below.
- 2012-05-02 19:50:02 Download
- Points: 1
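A minimal sketch of the distance computation, assuming a flat layout with uniformly scattered users and a few fixed base-station positions; all coordinates, the user count, and the cell radius are illustrative values, not taken from the package.
    % Pairwise user-to-base-station distances (illustrative layout)
    nUsers = 50;                           % number of users (assumption)
    R      = 500;                          % scatter radius in metres (assumption)
    bs     = [0 0; 1000 0; 500 866];       % example base-station coordinates
    users  = (rand(nUsers,2) - 0.5) * 2*R; % users scattered around the origin
    dx = users(:,1) - bs(:,1).';           % implicit expansion (R2016b or later)
    dy = users(:,2) - bs(:,2).';
    d  = sqrt(dx.^2 + dy.^2);              % d(i,j): distance from user i to base station j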
-
mseries
Generates an m-sequence of a chosen length and computes its autocorrelation function and power spectral density function, for use in system identification; see the LFSR sketch below.
- 2010-05-12 01:53:38 Download
- Points: 1
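A minimal sketch assuming a Fibonacci LFSR: it generates a length-(2^n - 1) m-sequence for the primitive polynomial x^5 + x^3 + 1 and computes the circular autocorrelation and PSD via the FFT. The register length and tap set are assumptions; other lengths follow by changing n and the taps.
    n    = 5;
    taps = [5 3];                 % feedback taps for x^5 + x^3 + 1 (assumption)
    reg  = ones(1, n);            % any non-zero initial state
    N    = 2^n - 1;               % m-sequence period
    seq  = zeros(1, N);
    for k = 1:N
        seq(k) = reg(end);                     % output bit
        fb     = mod(sum(reg(taps)), 2);       % XOR of tapped stages
        reg    = [fb reg(1:end-1)];            % shift register
    end
    x  = 2*seq - 1;                            % map {0,1} -> {-1,+1}
    Rx = real(ifft(abs(fft(x)).^2)) / N;       % circular autocorrelation
    Sx = abs(fft(x)).^2 / N;                   % power spectral density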
-
colorbar
Sets different types of colormap in MATLAB; a brief usage sketch follows below.
- 2013-04-13 16:02:12 Download
- Points: 1
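A brief usage sketch: draw a sample surface, switch the figure's colormap, and attach a color bar. The script's own colormap choices are unknown; jet is just one built-in option.
    figure;
    surf(peaks);       % sample surface data
    colormap(jet);     % other built-ins: hot, gray, parula, ...
    colorbar;          % display the color scale alongside the plot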
-
matlab
A MATLAB program that fits a Weibull distribution to statistical data and evaluates and computes its three parameters; a simplified fitting sketch follows below.
- 2021-03-08 15:29:28 Download
- Points: 1
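A simplified sketch of a two-parameter Weibull fit by least squares on the linearised CDF with median-rank plotting positions; the package's estimation of the third (location) parameter is not reproduced here, and the sample data are illustrative.
    data = sort([72 84 91 103 118 130 145 160]);   % illustrative sample
    n    = numel(data);
    F    = ((1:n) - 0.3) / (n + 0.4);              % median-rank plotting positions
    y    = log(-log(1 - F));                       % linearised Weibull CDF
    X    = [log(data); ones(1, n)].';
    p    = X \ y.';                                % linear least squares
    beta = p(1);                                   % shape parameter
    eta  = exp(-p(2) / beta);                      % scale parameter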
-
c5
Description: CHAPTER 5 (C5 Folder) BASICS OF ELECTRIC MACHINES AND TRANSFORMATION
Project 1: QD0 Transformation of Network Components
  SIMULINK file: none
  MATLAB M-file: none
Project 2: Space Vectors
  SIMULINK file: s2.mdl
  MATLAB M-file: m2.m (also used by masked block)
Project 3: Sinusoidal and Complex Quantities in QD0
  SIMULINK file: s3.mdl
  MATLAB M-file: m3.m (also used by masked block)
A minimal qd0 (Park) transformation sketch follows below.
- 2013-01-18 17:04:51 Download
- Points: 1
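A minimal sketch of the qd0 transformation in one common convention (q-axis aligned with the a-phase axis at theta = 0, 2/3 scaling); the chapter's own convention may differ in sign or scaling, and the angle and balanced abc set are illustrative.
    theta = pi/6;                                    % transformation angle (example)
    T = (2/3) * [cos(theta) cos(theta - 2*pi/3) cos(theta + 2*pi/3);
                 sin(theta) sin(theta - 2*pi/3) sin(theta + 2*pi/3);
                 0.5        0.5                 0.5];
    fabc = [cos(0); cos(-2*pi/3); cos(2*pi/3)];      % balanced abc quantities at t = 0
    fqd0 = T * fabc;                                 % [fq; fd; f0]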
-
INVERTERPWMRL
An inverter with an RL load, driven by pulse-width modulation (PWM) signals applied to all gates of the circuit; a carrier-based PWM sketch follows below.
- 2013-11-11 23:24:02 Download
- Points: 1
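A minimal sketch of carrier-based (sine-triangle) PWM gate-signal generation for one inverter leg; the carrier frequency, modulation index, and time step are assumptions, and the RL-load circuit itself is not simulated here.
    f1  = 50;                          % fundamental frequency, Hz
    fc  = 2000;                        % triangular carrier frequency, Hz (assumption)
    m   = 0.8;                         % modulation index (assumption)
    t   = 0:1e-6:0.04;                 % time axis, 1 us step
    ref = m * sin(2*pi*f1*t);                          % sinusoidal reference
    car = 2*abs(2*(fc*t - floor(fc*t + 0.5))) - 1;     % triangular carrier in [-1, 1]
    g1  = ref >= car;                  % upper-switch gate signal
    g2  = ~g1;                         % complementary lower-switch gate signal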
-
fir2iir
IIR filter synthesis algorithm with almost linear group delay; a group-delay evaluation sketch follows below.
- 2015-02-23 14:01:45 Download
- Points: 1
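The synthesis algorithm itself is not described here; as a minimal sketch, the snippet below only checks how linear the group delay of a given IIR filter is, by numerical differentiation of the unwrapped phase without toolbox calls. The second-order coefficients are placeholders, not output of this package.
    b = [0.0675 0.1349 0.0675];            % example numerator (2nd-order low-pass)
    a = [1.0000 -1.1430 0.4128];           % example denominator
    w = linspace(0, pi, 1024);             % frequency grid, rad/sample
    z = exp(-1j*w);
    H = polyval(fliplr(b), z) ./ polyval(fliplr(a), z);   % frequency response
    gd = -diff(unwrap(angle(H))) ./ diff(w);              % group delay in samples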
-
DIGITAL_FILTER_GUI
A digital filtering simulation program, offered for reference in the hope that it is useful for study; a simple filtering sketch follows below.
- 2007-04-05 13:37:12 Download
- Points: 1
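A simple sketch of the kind of filtering demo such a GUI typically wraps: a noisy sine passed through a moving-average FIR filter. The signal, noise level, and tap count are assumptions.
    fs = 1000;  t = 0:1/fs:1;
    x  = sin(2*pi*5*t) + 0.4*randn(size(t));   % 5 Hz tone plus noise
    b  = ones(1, 20) / 20;                     % 20-tap moving-average filter
    y  = filter(b, 1, x);                      % filtered output
    plot(t, x, t, y);  legend('noisy', 'filtered');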
-
Creep_Analysis__Circular_Plate
Numerical analysis of the creep of a viscoelastic functionally graded plate based on first-order shear deformation plate theory; a minimal viscoelastic creep sketch follows below.
- 2011-09-19 17:16:19 Download
- Points: 1
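As a heavily simplified sketch of the viscoelastic ingredient only, the snippet below evaluates Kelvin-Voigt creep strain under constant stress; the FSDT plate model and the functionally graded material law of the package are not reproduced, and all material values are illustrative.
    E      = 70e9;        % elastic modulus, Pa (illustrative)
    eta    = 1e12;        % viscosity, Pa*s (illustrative)
    sigma0 = 50e6;        % applied constant stress, Pa (illustrative)
    tau    = eta / E;     % retardation time, s
    t      = linspace(0, 5*tau, 200);
    eps_c  = (sigma0/E) * (1 - exp(-t/tau));   % creep strain vs. time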
-
WindyGridWorldQLearning
Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one. A tabular Q-learning sketch on the windy gridworld follows below.
- 2013-04-19 14:23:35 Download
- Points: 1
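A minimal tabular Q-learning sketch on the classic 7x10 windy gridworld; the grid size, wind profile, reward of -1 per step, and the learning parameters (alpha, epsilon, episode count) follow the standard textbook example and are assumptions about what this package implements.
    rows = 7;  cols = 10;
    wind  = [0 0 0 1 1 1 2 2 1 0];            % upward push per column
    start = [4 1];  goal = [4 8];             % start and goal cells
    moves = [-1 0; 1 0; 0 -1; 0 1];           % up, down, left, right
    Q = zeros(rows, cols, 4);                 % tabular action-values
    alpha = 0.5;  gam = 1.0;  epsilon = 0.1;  % undiscounted, absorbing goal
    for ep = 1:500
        s = start;
        while ~isequal(s, goal)
            if rand < epsilon
                a = randi(4);                              % explore
            else
                [~, a] = max(squeeze(Q(s(1), s(2), :)));   % exploit
            end
            s2 = s + moves(a, :);                          % apply action
            s2(1) = s2(1) - wind(s(2));                    % wind pushes the agent upward
            s2 = [min(max(s2(1),1),rows), min(max(s2(2),1),cols)];  % stay on the grid
            r = -1;                                        % cost per step
            % Q-learning update: bootstrap from the max over next actions
            Q(s(1), s(2), a) = Q(s(1), s(2), a) + alpha * ...
                (r + gam * max(Q(s2(1), s2(2), :)) - Q(s(1), s(2), a));
            s = s2;
        end
    end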