English translation:
Learning in such a network proceeds the same way as for perceptrons: example inputs are presented to the network, and if the network computes an output vector that matches the target, nothing is done. If there is an error (a difference between the output and the target), then the weights are adjusted to reduce this error. The trick is to assess the blame for an error and divide it among the contributing weights. In perceptrons, this is easy because there is only one weight between each input and the output. But in multilayer networks, there are many weights connecting each input to an output, and each of these weights contributes to more than one output.
In such a network learning proceeds, the perceptron same way: for example inputs are submitted to the network, and if the output vector computed by the network matches the target, no action is taken. If there is an error (a difference between output and target), then the weights adjustec to reduce this error. The trick is to assess the error blame-gulf among the caused weight. In perception, this is easy, because there is only one weight between each input and output. But in multilayer networks. There are many weights connecting each input to the output, and each of these weights helps more than one output.
The back-propagation algorithm is a sensible approach to dividing the contribution of each weight. As in the perceptron learning algorithm, we try to minimize the error between each target output and the output actually computed by the network. At the output layer, the weight update rule is very similar to the rule for the perceptron. There are two differences: the activation a_j of the hidden unit is used instead of the input value, and the rule contains a term for the gradient of the activation function. If Err_i is the error (T_i − O_i) at the output node, then the weight update rule for the link from unit j to unit i is
The back-propagation algorithm is a wise method to divide, the contribution of each weight. As in the perception learning algorithm, we try to reduce the mistakes between each target output and the output actually computed by the network. At the output layer, the weight update rule is very similar the perception rule. There are two differences: the hidden unit Auxerre activation, instead of the input value, is used; and the rule contains a terminology for the activation function gradient. If Erri is the mistake (titanium love) at the output node, then from unit weight link j to me unit the update rule
W_{j,i} ← W_{j,i} + α × a_j × Err_i × g′(in_i)
I see "BP" here, so as expected this should be neural-network material.
Don't trust translation software; I've retranslated it for you:
Terminology:
weight: 权重
hidden unit: 隐层
Learning in such a network proceeds much as it does for a perceptron: example inputs are fed into the network, and if the output vector the network computes matches the target, nothing is done. If there is an error (that is, a difference between the output and the target), the weights must be adjusted to reduce that error. The trick here is to find the source of the error and apportion it among the weights involved. In a perceptron this is very simple, because there is only one weight between each input and its corresponding output. But in a multilayer network, many weights connect each input to an output, and each weight leads to more than one output.
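To make the perceptron case concrete, here is a minimal sketch of the single-layer update loop described above; the hard-threshold activation, the learning rate alpha, and all names are my own illustrative choices, not something taken from the quoted textbook passage:

```python
import numpy as np

def perceptron_step(weights, x, target, alpha=0.1):
    """One learning step: present an example and, if the output
    disagrees with the target, adjust the weights to reduce the error."""
    output = 1 if np.dot(weights, x) >= 0 else 0  # hard-threshold activation
    error = target - output                       # Err = T - O
    if error != 0:
        # Credit assignment is trivial here: each input x_j reaches the
        # output through exactly one weight, so we simply nudge that
        # weight in proportion to its input.
        weights = weights + alpha * error * x
    return weights
```

When the output already matches the target, error is 0 and the weights are left untouched, which corresponds to the "nothing is done" case in the passage.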
The back-propagation algorithm is a sensible way to apportion the contribution of each weight. As in the perceptron learning algorithm, we try to minimize the error between the target output and the actually computed output. At the output layer, the weight-correction rule is very similar to the perceptron's rule, with only two differences: (1) the activation a_j of the hidden unit is used instead of the input value; (2) the rule contains the gradient of the activation function. If Err_i denotes the error between T_i and O_i at the output node, then the weight-correction rule for the link from j to i is:
W_{j,i} ← W_{j,i} + α × a_j × Err_i × g′(in_i)
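As a rough illustration of this output-layer rule, here is a sketch assuming a sigmoid activation function g; the passage does not fix a particular g, and the function and variable names are my own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_output_weights(W, a_hidden, target, alpha=0.1):
    """Apply W[j, i] <- W[j, i] + alpha * a_j * Err_i * g'(in_i) for
    every hidden unit j and output unit i, with g = sigmoid."""
    in_i = a_hidden @ W              # weighted input in_i to each output unit
    output = sigmoid(in_i)           # O_i = g(in_i)
    err = target - output            # Err_i = T_i - O_i
    g_prime = output * (1 - output)  # sigmoid gradient g'(in_i)
    # Outer product: rows indexed by hidden unit j, columns by output unit i.
    return W + alpha * np.outer(a_hidden, err * g_prime)
```

Note how the two differences mentioned above show up in the code: a_hidden plays the role that the raw input plays in the perceptron rule, and g_prime supplies the gradient term.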