This repository was archived by the owner on Nov 19, 2020. It is now read-only.
I used the parallel/symmetric approach (not deflation) with kurtosis as the contrast function. I found that the Matlab and Python implementations converged in fewer than 100 steps on this data, while the Accord.NET implementation did not converge within 1000 steps.
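For reference, the parallel/symmetric FastICA iteration with a kurtosis ("cube") contrast can be sketched in numpy roughly as below. The data here is a made-up stand-in for my test set, not the actual data:

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical stand-in for the test data (the original data set is not shown).
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
X = S @ np.array([[1.0, 0.5], [0.4, 1.0]])        # mixed observations

# Whiten: zero mean, (approximately) identity covariance.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X.T))
Z = X @ E @ np.diag(d ** -0.5) @ E.T

def sym_decorrelate(W):
    # W <- (W W^T)^{-1/2} W: re-orthonormalizes all rows at once
    # (this is what makes the approach "symmetric" rather than deflationary).
    s, u = np.linalg.eigh(W @ W.T)
    return u @ np.diag(s ** -0.5) @ u.T @ W

W = sym_decorrelate(rng.randn(2, 2))
for step in range(100):
    U = Z @ W.T
    # Kurtosis contrast: fixed-point update w <- E[z (w^T z)^3] - 3 w.
    W = sym_decorrelate((U ** 3).T @ Z / len(Z) - 3 * W)
```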
I looked into this a bit more, and the main culprit seems to be the convergence check (the `getMaximumAbsoluteChange` function in IndependentComponentAnalysis.cs), which differs from the Matlab and Python implementations. Accord.NET takes the entry of the demixing matrix `W` with the maximum absolute difference from the previous step's demixing matrix `W0`, calls this difference `delta`, and tests `delta < tolerance * lastChange`, where `tolerance` is 0.001 by default and `lastChange` is the `delta` found in the previous step. Does this work as intended? In essence, the maximum absolute difference would have to shrink by a factor of 1000 from one step to the next, which does not really make sense to me.
The Matlab and Python implementations instead do this comparison (in Matlab-ish pseudo-code): `1 - min(abs(diag(W * W0'))) < tolerance`
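In numpy, that check looks like this. The idea is that after symmetric decorrelation each row of `W` is unit-norm, so the diagonal of `W @ W0.T` is the cosine of the angle between a row and its previous version; convergence means every diagonal entry is close to ±1:

```python
import numpy as np

def fastica_converged(W, W0, tolerance=1e-4):
    """Convergence test in the style of the Matlab/Python FastICA
    implementations: every row of W must be (anti-)parallel to the
    corresponding row of W0 from the previous iteration."""
    return 1 - np.min(np.abs(np.diag(W @ W0.T))) < tolerance

# Rows changed only by a tiny rotation -> converged:
W0 = np.eye(2)
c, s = np.cos(1e-5), np.sin(1e-5)
W = np.array([[c, -s], [s, c]])
converged = fastica_converged(W, W0)   # True: 1 - cos(1e-5) ~ 5e-11
```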
If I modify the IndependentComponentAnalysis to check for convergence in the same way, it converges fine on my test data.
It would be nice to get a fix for this convergence check, or perhaps just better default parameters. The Accord.NET framework looks great otherwise, but this convergence issue prevents me from using it.