Well, optimally, a sensible machine learning researcher would want their model's decision surfaces to vary beyond the origin; that is why we include things like bias terms in our hypersurfaces. Here's an answer of mine from a while back.
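To make the bias point concrete, here is a minimal sketch (my own toy example, not from the original answer): a linear hypersurface w·x = 0 must pass through the origin, so it cannot separate a cluster of data that lives far from zero, while adding a bias b shifts the boundary wherever it is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
# A cluster of points centered far from the origin.
X = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
# Labels split the cluster at x0 = 3.
y = (X[:, 0] > 3.0).astype(float)

def predict(X, w, b=0.0):
    # Linear classifier: sign of w·x + b.
    return (X @ w + b > 0).astype(float)

w = np.array([1.0, 0.0])

# Without a bias, the boundary x0 = 0 lies far from the data and
# labels essentially every point the same way.
acc_no_bias = (predict(X, w) == y).mean()

# With b = -3, the same w places the boundary at x0 = 3 and
# separates the cluster perfectly.
acc_bias = (predict(X, w, b=-3.0) == y).mean()
```

The weight w only controls the orientation of the hypersurface; the bias is what lets it "vary beyond the origin."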
The supersymmetric artificial neural network hypothesis does not concern explicit methods for handling risk. Its main aim is to enable richer degrees of freedom in artificial neural networks, which may in turn make them better vessels for capturing risk.
Picture how learning models improved as richer number systems were used: from real-number-based neural nets, to real-number-based nets with convolutions, to complex-number-based neural nets, and so on. The supersymmetric ANN is yet another reasonable way to represent more structure from the input space.
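The real-to-complex step in that progression can be sketched in a few lines (a toy illustration of mine, not the supersymmetric construction itself): a single complex weight a + bi acts on a complex input like the 2×2 real matrix [[a, −b], [b, a]], a coupled rotation-and-scale that one real weight cannot express, so each parameter carries more structure.

```python
import numpy as np

# One complex input and one complex weight.
z = np.array([1.0 + 2.0j])
w = np.array([0.5 - 1.0j])

# Complex view: a single multiplication.
out_complex = w * z

# Equivalent real view: multiplying by w = a + bi is the 2x2
# rotation-and-scale matrix [[a, -b], [b, a]] acting on (Re z, Im z).
a, b = w[0].real, w[0].imag
M = np.array([[a, -b],
              [b,  a]])
out_real = M @ np.array([z[0].real, z[0].imag])
# out_complex and (out_real[0] + 1j * out_real[1]) agree.
```

Each richer number system constrains its weights to a more structured family of transformations, which is the sense in which "better numbers" let a net represent more from the input space.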