I have been using the Visual Bag of Words to identify different types of standard scanned documents (the goal being to sort the 4 different types, with the possibility of using this classification to look for specific data within these documents).
I have found an issue when doing the classification on large pictures (about 4032 x 3024): the numbers involved overflow in the GetRectangle function. I have looked at the source code and changed the integers to 64-bit integers, but that increased the memory usage by an extreme amount. Downscaling the image to smaller dimensions also fixes the problem.
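For now I downscale the scans before handing them to the bag of words. This is only a rough sketch of that preprocessing step using plain System.Drawing; the helper name and the 1024 px limit are my own choices, not anything from the library:

```csharp
using System;
using System.Drawing;

static class ScanPreprocessing
{
    // Shrink an image so its longest side does not exceed maxSide,
    // keeping the aspect ratio. 1024 px is an arbitrary example limit.
    public static Bitmap DownscaleIfNeeded(Bitmap source, int maxSide = 1024)
    {
        int longest = Math.Max(source.Width, source.Height);
        if (longest <= maxSide)
            return source;

        double scale = (double)maxSide / longest;
        int width = (int)Math.Round(source.Width * scale);
        int height = (int)Math.Round(source.Height * scale);

        // System.Drawing interpolates when constructing a Bitmap
        // from an existing image at a new size.
        return new Bitmap(source, new Size(width, height));
    }
}
```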
I will continue to test this, but would there be a more efficient/accurate way of doing it? I'm conscious that I'm new to machine learning and may be using the wrong tool for the job.
To replicate the issue, take the Visual Bag of Words example, expand one of the pictures to 4032 x 3024, and attempt to compute the bag of words; this will cause the error in question.
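The repro is essentially the following sketch. The file name is a placeholder for one of the example images, and the exact BagOfVisualWords calls (constructor argument, Learn) may differ between library versions, so treat them as assumptions rather than the exact example code:

```csharp
using System.Drawing;
using Accord.Imaging;

class OverflowRepro
{
    static void Main()
    {
        // Load one of the sample images from the example and blow it up
        // to roughly the size of a phone-camera scan.
        using (var original = Image.FromFile("sample.jpg"))
        using (var big = new Bitmap(original, new Size(4032, 3024)))
        {
            // Learning the codebook on the enlarged image is what triggers
            // the overflow inside GetRectangle on my machine.
            var bow = new BagOfVisualWords(10);
            bow.Learn(new[] { big });
        }
    }
}
```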