
Halcon-based image stitching algorithm

Published 2020-12-04

Download points: 1 · Downloads: 1

Code description:

A Halcon-based image stitching algorithm, covering everything from the algorithm flow to a working implementation. I wrote it myself, and I am rather proud of it.
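The algorithm steps themselves are not reproduced on this page, so as a rough illustration of the idea only (a Python/NumPy sketch, not the author's Halcon code; every function name here is invented for the example), the simplest stitching pipeline estimates the horizontal overlap between two images by normalized cross-correlation and then joins them, blending the shared columns:

```python
import numpy as np

def find_overlap(left, right, max_overlap):
    """Estimate how many columns two grayscale images share by maximizing
    normalized cross-correlation between the right edge of `left` and the
    left edge of `right`."""
    best, best_score = 1, -np.inf
    for s in range(1, max_overlap + 1):
        a = left[:, -s:].astype(float).ravel()
        b = right[:, :s].astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = (a @ b) / denom if denom > 0 else 0.0
        if score > best_score:
            best_score, best = score, s
    return best

def stitch(left, right, overlap):
    """Join the two images, averaging the shared columns to hide the seam."""
    blend = (left[:, -overlap:].astype(float) + right[:, :overlap].astype(float)) / 2
    return np.hstack([left[:, :-overlap].astype(float), blend,
                      right[:, overlap:].astype(float)])
```

A real Halcon implementation would instead match feature points between views and estimate a projective transform; this translation-only version only shows the overlap-then-blend structure of the problem.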

Download note: please do not download with Thunder (Xunlei). If a download fails, just download again; repeat downloads are not charged extra points!


  • Finite element analysis of bolt-preload contact in a wheel hub and disc assembly
    Tension-loaded bolted joints are the most widely used type of connection in mechanical structures. Most threaded connections must be tightened during assembly so that the joint carries a force before any working load is applied. This preload improves the reliability and tightness of the connection and prevents gaps or relative slip between the connected parts under load. In finite element analysis of structures with bolted joints, applying the preload to the model so that the bolt loading is simulated faithfully and correct results are obtained takes some care.
    2019-11-09 download
    Points: 1
  • MATLAB implementations of the Allan variance
    Eleven MATLAB implementations of the Allan variance, all downloaded from the internet; for reference only.
    2020-12-06 download
    Points: 1
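As a reference point for what such scripts compute, a minimal non-overlapping Allan variance can be sketched in Python/NumPy (an illustration, not one of the eleven MATLAB files):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of a sample sequence y at cluster
    size m: sigma^2(m) = 1/(2(K-1)) * sum_k (ybar_{k+1} - ybar_k)^2,
    where the ybar_k are the K cluster averages."""
    y = np.asarray(y, dtype=float)
    k = len(y) // m                       # number of full clusters
    means = y[:k * m].reshape(k, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)
```

Sweeping m over a range and plotting sigma(m) against the cluster time on a log-log scale gives the usual Allan deviation curve used to identify gyro and accelerometer noise terms.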
  • Jieba 0.35 Chinese word segmentation component
    Jieba is a Chinese word segmentation component that supports sentence segmentation, part-of-speech tagging, out-of-vocabulary word recognition, user dictionaries, and more. Its segmentation accuracy exceeds 97%.
    2020-12-10 download
    Points: 1
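Jieba itself builds a dictionary DAG over the sentence and segments with dynamic programming, falling back to an HMM for unknown words. As a much simpler illustration of dictionary-based segmentation (not Jieba's algorithm; the vocabulary and function name are made up), here is a forward maximum matching sketch:

```python
def forward_max_match(text, vocab, max_word_len=4):
    """Greedy dictionary segmentation: at each position take the longest
    word found in the vocabulary, falling back to a single character."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_word_len, len(text) - i), 0, -1):
            word = text[i:i + length]
            if length == 1 or word in vocab:
                tokens.append(word)
                i += length
                break
    return tokens
```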
  • Electronic medical record source code, pure C#
    A practical electronic medical record system; with minor changes it could be used commercially. Written in pure C# and worth studying.
    2020-12-05 download
    Points: 1
  • SharpMap-based rainfall isoline display source code
    A WinForms GIS program built on SharpMap (even though few people use WinForms these days). Using province-level and level-3 river SHP files for all of China as the base map, it traces and displays isolines from a triangulated irregular network: layered display, free zoom, mouse-drag panning, and A3/A4 print output. Suitable for displaying and printing hydrological and meteorological rainfall data. The sample data follows the 2005 edition of the "Real-time Rainfall and Water Regime Database Table Structure and Identifier Standard".
    2020-12-05 download
    Points: 1
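Isoline tracing over a triangulated irregular network reduces, per triangle, to finding where the contour level crosses the triangle's edges by linear interpolation. A sketch of that single step in Python (an illustration of the geometry only, not SharpMap's API; the helper name is invented):

```python
def triangle_contour_segment(pts, vals, level):
    """Return the two points where a contour level crosses the edges of one
    triangle, or None if the level does not pass through its interior.
    pts: three (x, y) vertices; vals: the scalar value at each vertex.
    Vertices lying exactly on the level are ignored in this simple sketch."""
    crossings = []
    for i in range(3):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % 3]
        v1, v2 = vals[i], vals[(i + 1) % 3]
        if (v1 - level) * (v2 - level) < 0:       # edge straddles the level
            t = (level - v1) / (v2 - v1)          # linear interpolation
            crossings.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return crossings if len(crossings) == 2 else None
```

Chaining these per-triangle segments across neighboring triangles yields the full traced isoline.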
  • Wolf pack and firefly swarm optimization algorithms and their applications
    Proposes a discrete swarm optimization algorithm that encodes solutions as job orderings. To improve its performance on the permutation flow shop scheduling problem, the algorithm is combined with a simplified neighborhood search, and it is applied successfully to permutation flow shop scheduling.
    2021-05-06 download
    Points: 1
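The job-order encoding and neighborhood search mentioned in this entry can be illustrated with a small Python sketch (a toy under assumed conventions, not the paper's algorithm): a makespan evaluator for a permutation flow shop plus an adjacent-swap descent as the neighborhood move:

```python
def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation
    flow shop; proc[j][k] is the processing time of job j on machine k."""
    m = len(proc[0])
    done = [0.0] * m                      # rolling completion time per machine
    for j in perm:
        done[0] += proc[j][0]
        for k in range(1, m):
            done[k] = max(done[k], done[k - 1]) + proc[j][k]
    return done[-1]

def swap_descent(perm, proc):
    """Simple neighborhood search: keep applying an improving adjacent swap
    to the job order until no swap reduces the makespan."""
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm) - 1):
            cand = perm[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if makespan(cand, proc) < makespan(perm, proc):
                perm, improved = cand, True
    return perm
```

In a swarm algorithm this local descent would refine the candidate orderings that the population search proposes.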
  • MATLAB modulation and demodulation code for ASK, BPSK, EPSK, FSK, OOK, and QPSK
    MATLAB code that implements several basic modulation and demodulation schemes from communication theory (ASK, BPSK, EPSK, FSK, OOK, QPSK); I hope it is useful.
    2021-05-06 download
    Points: 1
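As an illustration of one of the listed schemes, here is a minimal BPSK modulator and coherent demodulator in Python/NumPy (a sketch with assumed carrier, sample rate, and samples-per-bit values, not the MATLAB upload):

```python
import numpy as np

def bpsk_modulate(bits, fc=1000.0, fs=8000.0, spb=8):
    """Map bits {0,1} to symbols {-1,+1} and multiply by a carrier cosine;
    spb is the number of samples per bit."""
    t = np.arange(len(bits) * spb) / fs
    symbols = np.repeat(2 * np.asarray(bits) - 1, spb)
    return symbols * np.cos(2 * np.pi * fc * t)

def bpsk_demodulate(signal, fc=1000.0, fs=8000.0, spb=8):
    """Coherent detection: mix with the carrier, integrate over each bit
    interval, and threshold the result at zero."""
    t = np.arange(len(signal)) / fs
    mixed = signal * np.cos(2 * np.pi * fc * t)
    per_bit = mixed.reshape(-1, spb).sum(axis=1)
    return (per_bit > 0).astype(int).tolist()
```

The other schemes in the upload differ only in the mapping step: ASK/OOK scale the carrier amplitude, FSK switches between carrier frequencies, and QPSK maps bit pairs onto four carrier phases.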
  • OrCAD component libraries
    OrCAD component libraries collected from the websites of major manufacturers.
    2020-12-06 download
    Points: 1
  • Highly recommended: complete C# example source code, fixed asset management system
    One of a series of complete C# example projects (each with detailed documentation): a fixed asset management system, suitable for beginners practicing C# and SQL Server 2005. A database installation screencast (Flash format) is included. At readers' request, because the originally posted resources cost too many points for CSDN newcomers and CSDN does not allow changing a resource's point cost, all eight systems in the series have been re-released for free: financial voucher management, real estate agency, university faculty records, fixed asset management, inventory management, enterprise customer resource management, purchase-sale-inventory reporting, and online shopping mall. All eight can be found among my resources.
    2020-12-06 download
    Points: 1
  • Sparse autoencoder deep learning, MATLAB implementation
    A MATLAB implementation of sparse autoencoder deep learning, based on the Stanford CS294A/CS294W programming assignment starter code (train.m, sampleIMAGES.m, sparseAutoencoderCost.m, computeNumericalGradient.m; annotated by YiBinYU, yuyibintony@163.com, WuYi University). Step 0 sets the parameters that give good filters: visibleSize = 8*8 input units, hiddenSize = 25 hidden units, sparsityParam = 0.01 (the desired average activation rho of the hidden units), lambda = 0.0001 (weight decay), and beta = 3 (weight of the sparsity penalty). Step 1 implements sampleIMAGES, displays a random sample of patches with display_network, and initializes theta with initializeParameters(hiddenSize, visibleSize). Step 2 implements sparseAutoencoderCost step by step: forward propagation and the squared error term with backpropagation, then the weight decay term, then the sparsity penalty term, re-running gradient checking after each step. Step 3 verifies the numerical gradient with checkNumericalGradient; using smaller models and training sets (e.g. 10 training examples and 1 or 2 hidden units) speeds up debugging, but the final visualized weights should use the Step 0 parameters.
    2020-12-05 download
    Points: 1
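The sparse autoencoder cost described in this entry (squared reconstruction error, plus a weight decay term, plus a KL-divergence sparsity penalty on the average hidden activation) can be sketched in Python/NumPy as follows (shapes and names assumed for illustration; this is not the MATLAB upload):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, W2, b2, X, lam=1e-4, rho=0.01, beta=3.0):
    """Sparse autoencoder cost: squared error + weight decay + beta * KL
    sparsity penalty. X is (visibleSize, m); W1 is (hiddenSize, visibleSize)."""
    m = X.shape[1]
    A1 = sigmoid(W1 @ X + b1[:, None])    # hidden activations
    A2 = sigmoid(W2 @ A1 + b2[:, None])   # reconstruction of the input
    rho_hat = A1.mean(axis=1)             # average activation per hidden unit
    err = 0.5 * np.sum((A2 - X) ** 2) / m
    decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return err + decay + beta * kl
```

As in the assignment's Step 3, the gradient of such a cost is best verified numerically on a tiny model by comparing backpropagated derivatives against finite differences before training at full size.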