Given a feedforward network with $n$ input nodes, a single hidden layer of $h$ nodes, and $m$ output nodes, we associate a linear affine function with each node of the hidden layer as follows:
\[ a_j(x) = \sum_{i=1}^{n} w_{ij}\, x_i + \theta_j, \qquad j = 1, \dots, h, \]
where $\theta_j$ is the threshold of hidden node $j$.
Let $f$ and $g$ on $\mathbb{R}^n$ be linear affine functions; if either $f(x) = g(x)$ or $f(x) = -g(x)$, for all $x \in \mathbb{R}^n$, then $f$ and $g$ are sign-equivalent.
Let $w_{ij}$ be the weight between node $i$ of the input layer and node $j$ of the hidden layer, and $v_{jk}$ be the weight between node $j$ of the hidden layer and node $k$ of the output layer.
As outlined in [8], a feedforward network is reducible if one or more of the following conditions hold:
- For some $j \in \{1, \dots, h\}$ and all $k \in \{1, \dots, m\}$, $v_{jk} = 0$.
- There exist two different indices $j_1 \ne j_2$ such that the functions $a_{j_1}$ and $a_{j_2}$ are sign-equivalent (i.e. $a_{j_1} = \pm a_{j_2}$).
- One of the functions $a_j$, for $j = 1, \dots, h$, is constant.
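As a concrete illustration, the three conditions can be checked numerically. The sketch below assumes a hypothetical array layout (weight matrix `W` for input-to-hidden, `V` for hidden-to-output, threshold vector `theta`); the names and the tolerance handling are choices made for this example, not part of the original formulation.

```python
import numpy as np

def reducibility_conditions(W, theta, V, tol=1e-12):
    """Check the three reducibility conditions for a single-hidden-layer
    network.  Hypothetical layout: W[i, j] is the weight from input node i
    to hidden node j, theta[j] the threshold of hidden node j, and
    V[j, k] the weight from hidden node j to output node k."""
    h = W.shape[1]
    # Condition 1: some hidden node j has v_jk = 0 for all k.
    dead_output = [j for j in range(h) if np.all(np.abs(V[j]) < tol)]
    # Condition 2: two hidden nodes with sign-equivalent affine functions,
    # i.e. the stacked coefficients (w_{.j}, theta_j) agree up to sign.
    sign_equiv = []
    for j1 in range(h):
        for j2 in range(j1 + 1, h):
            a1 = np.append(W[:, j1], theta[j1])
            a2 = np.append(W[:, j2], theta[j2])
            if np.allclose(a1, a2, atol=tol) or np.allclose(a1, -a2, atol=tol):
                sign_equiv.append((j1, j2))
    # Condition 3: a_j is constant, i.e. all input weights into j are zero.
    constant = [j for j in range(h) if np.all(np.abs(W[:, j]) < tol)]
    return dead_output, sign_equiv, constant
```

A network is reducible exactly when at least one of the three returned lists is non-empty.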
Next we briefly examine each of the above three cases.
- (First case): Node $j$ of the hidden layer and all its connections can be removed without any effect on the network output.
- (Second case): Let $a_{j_1}$ and $a_{j_2}$ be sign-equivalent, with $a_{j_1} = \varepsilon\, a_{j_2}$ for $\varepsilon \in \{-1, +1\}$. Since $\tanh$ is an odd function, $\tanh(a_{j_1}(x)) = \varepsilon \tanh(a_{j_2}(x))$.
Remove node $j_2$ and all its connections, then change the output weights for $j_1$ to:
\[ \tilde{v}_{j_1 k} = v_{j_1 k} + \varepsilon\, v_{j_2 k}, \qquad k = 1, \dots, m. \]
The above modification will not alter the original I-O mapping.
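The second reduction can be sketched as follows. The array layout (`W[i, j]`, `theta[j]`, `V[j, k]`) and the helper names are hypothetical, chosen for illustration; the weight update is the one given above.

```python
import numpy as np

def forward(W, theta, V, x):
    """Single-hidden-layer tanh network: y_k = sum_j v_jk * tanh(a_j(x))."""
    return np.tanh(x @ W + theta) @ V

def merge_sign_equivalent(W, theta, V, j1, j2, eps):
    """Remove hidden node j2 when a_{j1} = eps * a_{j2} (eps = +1 or -1),
    folding its output weights into those of j1:
        v_{j1 k}  <-  v_{j1 k} + eps * v_{j2 k}."""
    V = V.copy()
    V[j1] += eps * V[j2]
    keep = [j for j in range(W.shape[1]) if j != j2]
    return W[:, keep], theta[keep], V[keep]
```

Evaluating both networks on a few arbitrary inputs confirms that the reduced network realizes the same I-O mapping.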
- (Third case): In this case all the weights between the input layer and node $j$ are zero, so $a_j(x) = \theta_j$ and node $j$ contributes the constant $v_{jk}\tanh(\theta_j)$ to each output node $k$; node $j$ is therefore removed and the thresholds of the output nodes are adjusted by these constants.
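The third reduction amounts to folding a constant hidden-node output into the output thresholds. In the sketch below, the array layout (`W[i, j]`, `theta[j]`, `V[j, k]`) and the output-threshold vector `b[k]` are hypothetical names chosen for illustration:

```python
import numpy as np

def forward(W, theta, V, b, x):
    """Network with output thresholds: y_k = sum_j v_jk * tanh(a_j(x)) + b_k."""
    return np.tanh(x @ W + theta) @ V + b

def remove_constant_node(W, theta, V, b, j):
    """Remove hidden node j whose input weights are all zero, so that
    tanh(a_j(x)) = tanh(theta_j) is constant; its contribution
    v_{jk} * tanh(theta_j) is absorbed into the output thresholds."""
    b = b + V[j] * np.tanh(theta[j])
    keep = [i for i in range(W.shape[1]) if i != j]
    return W[:, keep], theta[keep], V[keep], b
```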