A smart physics person's review of my "Supersymmetric Artificial Neural Network"


A physics person from the University of Queensland and I are working together to further develop the “Supersymmetric Artificial Neural Network”:

The person’s paper/presentation (see Section 5.1 in the OpenReview item below for a brief discussion of my Supersymmetric Artificial Neural Network, although reading from the start is perhaps better):
Open Review : “Applications of Super-mathematics to Machine learning”

The person’s background:
Physics Stack Exchange : Mitchell Porter


Supersymmetrical weightings are deviations from the center?

Dumb question: if it’s supersymmetrical, how does it process risk? Two supersymmetrical outputs compared, perhaps between layers?

Well, optimally, a sensible machine learning researcher would want their answers to vary beyond the origin; that’s why we use things like biases in our hypersurfaces.
Here’s an answer of mine from a while back.
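A toy NumPy sketch of the point about biases (the weights and inputs here are illustrative values I chose, not anything from the thread): without a bias term, a linear unit’s decision boundary is a hyperplane forced through the origin, while adding a bias lets it shift anywhere in the input space.

```python
import numpy as np

w = np.array([1.0, -2.0])   # illustrative weights
b = 3.0                     # illustrative bias

def no_bias(x):
    # Boundary is {x : w @ x = 0}, which always contains the origin.
    return np.sign(w @ x)

def with_bias(x):
    # Boundary is {x : w @ x + b = 0}, shifted away from the origin.
    return np.sign(w @ x + b)

x = np.array([0.0, 0.0])    # the origin
print(no_bias(x))    # 0.0 — the origin sits exactly on the unbiased boundary
print(with_bias(x))  # 1.0 — the bias moves the boundary off the origin
```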


Hey Joe.

The supersymmetric artificial neural network hypothesis does not aim to prescribe explicit methods for handling risk. Its main aim is to enable richer degrees of freedom in artificial neural networks, which may then be better vessels for capturing risk.

Picture how learning models improved as richer number systems were used: from real-number-based neural nets, to real-number-based nets with convolutions, to complex-number-based neural nets, and so on. The supersymmetric ANN is yet another reasonable way to represent more of the structure in the input space.
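A minimal NumPy sketch of one step in that progression (shapes and values are illustrative assumptions of mine, not from the thread): a complex-valued dense layer carries both a magnitude and a phase per connection, doubling the real degrees of freedom compared with a real-valued layer of the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-valued dense layer: 3 * 2 real parameters.
W_real = rng.standard_normal((3, 2))

# Complex-valued dense layer of the same shape: each weight has a real
# and an imaginary part, i.e. twice the real parameters per connection.
W_complex = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

x = np.array([1.0, 0.5, -0.25])  # illustrative input vector

y_real = x @ W_real        # real-valued output, shape (2,)
y_complex = x @ W_complex  # complex-valued output, shape (2,)

print(y_real.dtype, y_complex.dtype)  # float64 complex128
```

The supersymmetric proposal continues this pattern with number systems from super-mathematics rather than the complex numbers shown here.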


I’m not sure I grasp what you’re trying to ask in the question above.

Ok, thank you. So this will allow more freedom/granularity in predicting general centers/intersections from origins when using AI?

Any thoughts on (full) stack GANs?

Greetings. I would have to do more research before I could answer properly.

Salutations… I have recently (re)read the Alexey Potapov blog from April. Specialist vocabulary is a big challenge…