Figure 1: A feedforward neural net consisting of an input layer of $m$ ``fan-out'' nodes, a single hidden layer of $n$ computational nodes, and an output layer of $u$ computational nodes. Each computational node carries a $\tanh$ activation function and a threshold.
Hector Sussmann [8] examined network reducibility and
equivalence for the input-output mappings of a neural net. The net
structure considered in [8] is a feedforward net
with a single hidden layer whose nodes use a hyperbolic
tangent activation function ($\tanh$). The units of the output layer
also use $\tanh$ activations.
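For concreteness, the I-O map of such a net can be written in a few lines. The NumPy sketch below is illustrative only; the names (\texttt{net}, \texttt{W1}, \texttt{b1}, \texttt{W2}, \texttt{b2}) are hypothetical and not taken from the original text:
\begin{verbatim}
import numpy as np

def net(x, W1, b1, W2, b2):
    """I-O map of a single-hidden-layer tanh net.

    x  : input vector of length m
    W1 : (n, m) hidden weights,  b1 : n hidden thresholds
    W2 : (u, n) output weights,  b2 : u output thresholds
    """
    h = np.tanh(W1 @ x + b1)      # hidden layer of n tanh units
    return np.tanh(W2 @ h + b2)   # output layer of u tanh units
\end{verbatim}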
In addition, Sussmann proved what is called the \emph{uniqueness
theorem} in the framework of feedforward net structures.
It states that \emph{two irreducible feedforward nets with the same
input-output mapping are related by a transformation in $G$},
where $G$ is a discrete symmetry group of nets as originally
proposed by Hecht-Nielsen [10].
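A group element can be pictured concretely: it permutes the hidden units and flips the signs of some of them, with compensating sign flips on their outgoing weights. Because $\tanh$ is odd ($\tanh(-z) = -\tanh(z)$), the I-O map is unchanged. The following numerical sketch checks this under the same hypothetical naming as above; the matrices \texttt{P} and \texttt{S} are illustrative, not from the source:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, u = 3, 4, 2
W1, b1 = rng.normal(size=(n, m)), rng.normal(size=n)
W2, b2 = rng.normal(size=(u, n)), rng.normal(size=u)

def net(x, W1, b1, W2, b2):
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

# A symmetry: permute the hidden units, then flip the sign of
# some of them.  tanh is odd, so flipping a unit's incoming
# weights and threshold is cancelled by flipping its outgoing
# weights.
perm = rng.permutation(n)
P = np.eye(n)[perm]                       # permutation matrix
S = np.diag(rng.choice([-1.0, 1.0], n))   # sign-flip matrix

W1t, b1t = S @ P @ W1, S @ P @ b1         # transformed hidden layer
W2t = W2 @ P.T @ S                        # compensated output layer

x = rng.normal(size=m)
assert np.allclose(net(x, W1, b1, W2, b2),
                   net(x, W1t, b1t, W2t, b2))
\end{verbatim}
Any composition of such permutations and sign flips leaves the I-O map invariant; these are exactly the ``simple internal symmetries'' the theorem tolerates.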
The main point in [8] is \emph{to show that two
input-output equivalent nets must necessarily be the same net, up to
some simple internal symmetries} (more on this later). It was also
pointed out that the irreducibility condition stated in the
\emph{uniqueness theorem} is needed because there is another source of
non-uniqueness: a net may contain nodes that
make no contribution whatsoever to the output. This means that a net
can sometimes be reduced without changing its I-O map, so
the above theorem can hold only for irreducible nets.
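A reducible net of this kind is easy to exhibit: if all outgoing weights of a hidden node are zero, the node contributes nothing and can be deleted without altering the I-O map. A minimal sketch, again with hypothetical names:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, n, u = 3, 4, 2
W1, b1 = rng.normal(size=(n, m)), rng.normal(size=n)
W2, b2 = rng.normal(size=(u, n)), rng.normal(size=u)
W2[:, 0] = 0.0    # hidden unit 0 never reaches the output

def net(x, W1, b1, W2, b2):
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

# Remove the dead unit: the reduced net has n - 1 hidden nodes
# but realizes exactly the same I-O map.
W1r, b1r, W2r = W1[1:], b1[1:], W2[:, 1:]

x = rng.normal(size=m)
assert np.allclose(net(x, W1, b1, W2, b2),
                   net(x, W1r, b1r, W2r, b2))
\end{verbatim}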
One major consequence of Sussmann's theorem is that the I-O mapping of an
irreducible net cannot be obtained from any net with fewer hidden
nodes.