
Simulink Simulation of an RBF Neural Network PID Controller Implemented as an S-Function

Published 2020-12-04
Download credits: 1 · Downloads: 10

Code description:

Simulink simulation of an RBF neural network PID controller implemented as an S-function.
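The usual way to build an RBF neural network PID controller of this kind (and likely what the S-function in this package does, though its source is not shown on this page) is to run an RBF network as an online identifier of the plant, use it to estimate the plant Jacobian ∂y/∂u, and adjust the PID gains by gradient descent with that estimate. Below is a minimal NumPy sketch of the scheme; the first-order plant model, network sizes, and learning rates are all illustrative assumptions, not values taken from the package:

```python
import numpy as np

def rbf_pid_sim(steps=400, seed=0):
    """Incremental PID control of a toy plant, with an RBF network
    identifier estimating the plant Jacobian dy/du for gain tuning.
    Plant (hypothetical): y(k+1) = 0.6*y(k) + u(k)."""
    rng = np.random.default_rng(seed)
    # RBF identifier: input x = [u(k), y(k), y(k-1)] -> predicted y(k+1)
    n_h = 6                                      # hidden RBF units (assumed)
    c = rng.uniform(-1.0, 2.0, size=(3, n_h))    # Gaussian centers
    b = np.full(n_h, 1.5)                        # Gaussian widths
    w = rng.normal(0.0, 0.01, size=n_h)          # output weights
    eta_w, eta_g = 0.25, 0.02                    # identifier / gain learning rates
    kp, ki, kd = 0.3, 0.2, 0.02                  # initial PID gains (assumed)
    r = 1.0                                      # step reference
    y = y_prev = u = 0.0
    e1 = e2 = 0.0
    for _ in range(steps):
        e = r - y
        # incremental PID law
        du = kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2)
        u = u + du
        y_next = 0.6 * y + u                     # plant step
        # RBF forward pass and LMS weight update toward the measured output
        x = np.array([u, y, y_prev])
        h = np.exp(-np.sum((x[:, None] - c) ** 2, axis=0) / (2 * b ** 2))
        ym = w @ h
        w += eta_w * (y_next - ym) * h
        # Jacobian estimate dy/du from the trained RBF net
        jac = np.sum(w * h * (c[0] - u) / b ** 2)
        # gradient-style self-tuning of the PID gains using the Jacobian
        kp += eta_g * e * jac * (e - e1)
        ki += eta_g * e * jac * e
        kd += eta_g * e * jac * (e - 2 * e1 + e2)
        y_prev, y = y, y_next
        e2, e1 = e1, e
    return y, (kp, ki, kd)
```

With these (assumed) settings the loop tracks the unit-step reference, and the gain updates stay small because the tracking error decays geometrically.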

Download note: please do not download with Thunder (Xunlei). If a download fails, just download again; retries are not charged extra credits.


  • JPEG source code (C implementation)
    JPEG source code in C, including both the encoder and the decoder
    2021-05-07 download
    Credits: 1
  • MATLAB channel simulation
    MATLAB simulation programs that generate Nakagami, Rayleigh, and other fading channels
    2020-06-26 download
    Credits: 1
  • Communication-principles animations (complete set)
    Covers modulation and demodulation of 2ASK, 2PSK, 2DPSK, 2FSK, and MSK, plus partial-response systems, the sampling theorem, the zero-ISI condition, and the formation of eye diagrams. Animated demos in SWF format; concise, clear, and flexible to use.
    2020-07-02 download
    Credits: 1
  • Multi-robot path-planning algorithm
    Multi-robot path-planning algorithm with a visual interface; the default experiment data plans paths for 4 robots
    2020-12-06 download
    Credits: 1
  • DOE experimental design (SAS_JMP): classic study cases
    DOE, Design of Experiments, is a scientific method for studying and handling the relationship between multiple factors and a response variable. By choosing test conditions sensibly, arranging the experiments, and analyzing the resulting data, it identifies the best overall improvement plan. From Ronald Fisher first proposing the concept of DOE in agricultural experiments in the 1920s to the worldwide rise of Six Sigma management, DOE has developed over more than 80 years and enjoys a high reputation in both academia and industry.
    2021-05-06 download
    Credits: 1
  • Research and implementation of image-enhancement methods
    In image processing, image-enhancement techniques play an important role in improving image quality: by selectively emphasizing some information in an image while suppressing other information, they improve the visual effect and convert the original image into a form better suited to human viewing and to computer analysis. This work studies grayscale transformation, histogram equalization, and fuzzy enhancement in depth, and proposes solutions to problems met during enhancement. For the key question of how to partition the gray-level range in piecewise linear transformation, it gives a piecewise linear method based on region segmentation, which speeds up the adjustment of gray-level intervals and improves execution efficiency. To adapt to local brightness characteristics, it gives a parabola-adjusted histogram-equalization method that can tune image brightness and enhance regional contrast, and it also gives a …
    2020-12-02 download
    Credits: 1
  • Four-digit common-cathode seven-segment display driver in VHDL on an FPGA
    Design of a display-driver circuit for a four-digit common-cathode seven-segment display, written in VHDL for an FPGA
    2020-12-09 download
    Credits: 1
  • Rosenfeld thinning algorithm for binary Chinese-character images
    Implements the Rosenfeld thinning algorithm; the code is thoroughly commented and easy to read.
    2020-12-05 download
    Credits: 1
  • Matlab implementation of sparse-autoencoder deep learning
    Matlab implementation of sparse-autoencoder deep learning. The bundled train.m is the CS294A/CS294W programming-assignment starter code (annotated by YiBinYU, yuyibintony@163.com, WuYi University) and walks through: STEP 0, setting the parameters (visibleSize = 8*8 input units, hiddenSize = 25 hidden units, sparsityParam = 0.01 desired average hidden activation, the Greek rho, lambda = 0.0001 weight decay, beta = 3 sparsity-penalty weight); STEP 1, implementing sampleIMAGES and displaying 204 random patches via display_network(patches(:, randi(size(patches, 2), 204, 1)), 8); STEP 2, implementing sparseAutoencoderCost, adding the squared-error cost, the weight-decay term, and the sparsity penalty step by step, re-running gradient checking after each addition; STEP 3, gradient checking with computeNumericalGradient and checkNumericalGradient, preferably on small models (e.g. 10 training examples and 1-2 hidden units) to speed debugging up.
    2020-12-05 download
    Credits: 1
  • Research on capacity-optimization algorithms for massive MIMO
    Research on optimizing the capacity of massive MIMO; can be used to study capacity-optimization algorithms for MIMO systems.
    2020-12-10 download
    Credits: 1
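The sparse-autoencoder item above outlines the CS294A train.m workflow: implement sparseAutoencoderCost (squared-error cost plus weight decay plus a KL sparsity penalty) and verify the analytic gradient against a numerical one. A minimal NumPy sketch of that cost/gradient pair and the check follows; the network sizes and random data in the usage note are illustrative, not the exercise's 8*8/25 configuration:

```python
import numpy as np

def sparse_ae_cost(theta, visible, hidden, lam, rho, beta, data):
    """Cost and gradient of a sparse autoencoder: squared error +
    weight decay + KL sparsity penalty (CS294A-style formulation)."""
    n = data.shape[1]
    # unpack the flat parameter vector
    s = 0
    W1 = theta[s:s + hidden * visible].reshape(hidden, visible); s += hidden * visible
    W2 = theta[s:s + visible * hidden].reshape(visible, hidden); s += visible * hidden
    b1 = theta[s:s + hidden]; s += hidden
    b2 = theta[s:s + visible]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    a2 = sig(W1 @ data + b1[:, None])            # hidden activations
    a3 = sig(W2 @ a2 + b2[:, None])              # reconstruction
    rho_hat = a2.mean(axis=1)                    # average hidden activation
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    cost = (0.5 * np.sum((a3 - data) ** 2) / n
            + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
            + beta * kl)
    # backpropagation
    d3 = (a3 - data) * a3 * (1 - a3)
    sparse_term = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d2 = (W2.T @ d3 + sparse_term[:, None]) * a2 * (1 - a2)
    gW1 = d2 @ data.T / n + lam * W1
    gW2 = d3 @ a2.T / n + lam * W2
    gb1 = d2.mean(axis=1)
    gb2 = d3.mean(axis=1)
    return cost, np.concatenate([gW1.ravel(), gW2.ravel(), gb1, gb2])

def numerical_grad(f, theta, eps=1e-5):
    """Central-difference gradient, as in computeNumericalGradient."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        t1, t2 = theta.copy(), theta.copy()
        t1[i] += eps
        t2[i] -= eps
        g[i] = (f(t1)[0] - f(t2)[0]) / (2 * eps)
    return g
```

Gradient checking on a deliberately tiny model (here 5 visible and 3 hidden units on 10 random patches) is fast and catches backprop sign errors before training the full-size network.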