
Avoiding large daq files #1164


Description

@mariorl

Hi, I'm facing problems with large .db DAQ files: when a somewhat complicated process needs to be recorded, it generates .db files of about 45 MB. I've noticed that handling these big JSON sets when trying to show them on a trend is slow, leads to out-of-memory errors in the browser, and eventually makes the whole FUXA Node process fail.

One way to dramatically reduce the amount of recorded data would be a threshold setting in the Tag Options dialog. With that, you could set a long datalogging interval, for example one sample every 60 seconds, and let the crossing of the threshold trigger an extra datalog.
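As a rough sketch of what I mean (the names `DeadbandLogger`, `threshold`, and `maxIntervalMs` are just illustrative, not existing FUXA APIs):

```typescript
// Sketch of threshold-triggered logging with a slow periodic fallback.
// A sample is stored when the value moves more than `threshold` away from
// the last stored value, or when `maxIntervalMs` has elapsed since the
// last store (the regular 60-second period mentioned above).
class DeadbandLogger {
  private lastStoredValue: number | null = null;
  private lastStoredAt = 0;

  constructor(
    private threshold: number,        // deadband, in engineering units
    private maxIntervalMs: number,    // e.g. 60000 for one sample per minute
    private store: (value: number, timestamp: number) => void,
  ) {}

  onSample(value: number, timestamp: number): void {
    const elapsed = timestamp - this.lastStoredAt;
    const crossed =
      this.lastStoredValue === null ||
      Math.abs(value - this.lastStoredValue) >= this.threshold;

    if (crossed || elapsed >= this.maxIntervalMs) {
      this.store(value, timestamp);
      this.lastStoredValue = value;
      this.lastStoredAt = timestamp;
    }
  }
}
```

With a mostly stable signal this stores one row per minute instead of one per poll, while fast excursions are still captured the moment they cross the threshold.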

I'm very experienced with these techniques, and as a heads-up I can tell you this kind of triggering (by threshold) can raise another problem: a very jerky signal, from a broken sensor for example, or from an electrically noisy environment, will trigger the datalogging many times. In those cases I usually apply measurement rejection based on comparison with the previous value. But this second stage of signal conditioning is not needed most of the time.
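For completeness, the rejection stage could look something like this (again only a sketch under my own assumptions; `maxStep` and `confirmCount` are hypothetical settings, not FUXA options):

```typescript
// Sketch of simple measurement rejection: a sample whose jump from the last
// accepted value exceeds `maxStep` is discarded as a probable spike, unless
// several consecutive samples agree, in which case the new level is treated
// as a genuine step change and accepted.
class SpikeFilter {
  private lastAccepted: number | null = null;
  private outliersInARow = 0;

  constructor(
    private maxStep: number,      // largest plausible change between samples
    private confirmCount = 3,     // consecutive outliers needed to accept a real step
  ) {}

  accept(value: number): boolean {
    if (
      this.lastAccepted === null ||
      Math.abs(value - this.lastAccepted) <= this.maxStep
    ) {
      this.lastAccepted = value;
      this.outliersInARow = 0;
      return true;
    }
    // Value jumped too far: reject it, but count it so that a sustained
    // change (confirmed over `confirmCount` samples) eventually passes.
    if (++this.outliersInARow >= this.confirmCount) {
      this.lastAccepted = value;
      this.outliersInARow = 0;
      return true;
    }
    return false;
  }
}
```

Chained in front of the threshold trigger, this keeps one noisy spike from producing a burst of extra datalogs while still letting real level changes through.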
