The Reducibility Conditions

Given a feedforward network with one hidden layer, we associate with each node $j$ of the hidden layer a linear affine function $A_j$ as follows:

\[
A_j(x) \;=\; \sum_{i=1}^{n} w_{ij}\, x_i + b_j , \qquad j = 1, \ldots, h ,
\]

where $x = (x_1, \ldots, x_n)$ is the input vector, $b_j$ is the bias of hidden node $j$, and $h$ is the number of hidden nodes.
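For concreteness, the following is a minimal sketch of how these affine functions can be evaluated; the weight matrix `W`, bias vector `b`, and input `x` are hypothetical, with $w_{ij}$ stored as `W[i, j]`.

```python
import numpy as np

# Hypothetical one-hidden-layer network: n = 3 input nodes, h = 2 hidden nodes.
W = np.array([[ 0.2, -1.0],
              [ 0.5,  0.0],
              [-0.3,  0.7]])          # W[i, j] = w_ij (input node i -> hidden node j)
b = np.array([0.1, -0.4])             # b[j] = bias of hidden node j

def A(x):
    """Evaluate all hidden-layer affine functions A_j(x) = sum_i w_ij x_i + b_j."""
    return W.T @ x + b

x = np.array([1.0, 2.0, 3.0])
print(A(x))                            # vector [A_1(x), A_2(x)]
```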

Let $A$ and $A'$ be linear affine functions on $\mathbb{R}^n$; if either $A(x) = A'(x)$ or $A(x) = -A'(x)$ for all $x \in \mathbb{R}^n$, then $A$ and $A'$ are said to be sign-equivalent. Let $w_{ij}$ be the weight between node $i$ of the input layer and node $j$ of the hidden layer, and $v_{jk}$ be the weight between node $j$ of the hidden layer and node $k$ of the output layer. As outlined in [1], a feedforward network is reducible if one or more of the following conditions hold (a small computational sketch of these checks follows the list):

  1. For some hidden node $j$ and all output nodes $k$, $v_{jk} = 0$.
  2. There exist two different indices $j_1 \neq j_2$ such that the functions $A_{j_1}$ and $A_{j_2}$ are sign-equivalent (i.e. $A_{j_1} \equiv \pm A_{j_2}$).
  3. One of the functions $A_j$, for $j = 1, \ldots, h$, is constant.
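The three conditions amount to a direct check on the weight matrices. The sketch below is illustrative only: the array layout, the function name `is_reducible`, and the numerical tolerance are assumptions, not part of the original formulation. It treats two affine functions as sign-equivalent when their weight columns and biases agree up to a common sign.

```python
import numpy as np

def is_reducible(W, b, V, tol=1e-12):
    """Check the three reducibility conditions for a one-hidden-layer network.

    W : (n, h) input-to-hidden weights, W[i, j] = w_ij
    b : (h,)   hidden-node biases
    V : (h, m) hidden-to-output weights, V[j, k] = v_jk
    (Array layout and tolerance are illustrative assumptions.)
    """
    n, h = W.shape

    # Condition 1: some hidden node j has all outgoing weights v_jk = 0.
    if np.any(np.all(np.abs(V) < tol, axis=1)):
        return True

    # Condition 3: some affine function A_j is constant, i.e. w_ij = 0 for all i.
    if np.any(np.all(np.abs(W) < tol, axis=0)):
        return True

    # Condition 2: two distinct hidden nodes j1 != j2 with A_j1 = +/- A_j2,
    # i.e. equal (or exactly opposite) weight columns and biases.
    for j1 in range(h):
        for j2 in range(j1 + 1, h):
            same = (np.allclose(W[:, j1], W[:, j2], atol=tol)
                    and abs(b[j1] - b[j2]) < tol)
            opposite = (np.allclose(W[:, j1], -W[:, j2], atol=tol)
                        and abs(b[j1] + b[j2]) < tol)
            if same or opposite:
                return True

    return False

# Example: the second hidden node duplicates the first, so condition 2 fires.
W = np.array([[1.0, 1.0], [2.0, 2.0]])   # n = 2 inputs, h = 2 hidden nodes
b = np.array([0.5, 0.5])
V = np.array([[1.0], [1.0]])             # m = 1 output
print(is_reducible(W, b, V))             # True
```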
Next we briefly examine each of the above three cases.