Allow larger datastore dumps #3344
Conversation
👍 Thanks for this!
In the original code it seems like people could define the limit on the request (e.g. "I just want the first 10 rows"). If I'm understanding your changes correctly, people will now always get all the rows regardless of any limit they pass.
You're right. Are those parameters documented? (Do we want to keep them?) The way they were written made it easy to consume all the memory on the web server if a user requests enough rows. I can add them back, but I would like to document them at least.
There should probably be a hard limit either way, with params or without, even if it's a big one (1M?).
I agree. Let me just add those options back here in case someone is using them. On another topic: should I be including a UTF-8 BOM here so that Excel can open these files? It would be a user-visible change. Add an option for that at the same time?
I personally would not add it by default, but I'm happy with an option to enable it.
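A minimal sketch of how an opt-in BOM could work, assuming a generator-based CSV writer (the `csv_writer` name and `bom` flag here are illustrative, not the PR's actual API):

```python
import csv
import io

UTF8_BOM = u'\ufeff'

def csv_writer(rows, bom=False):
    """Yield CSV text chunks; prepend a UTF-8 BOM only when requested,
    so Excel can detect the encoding without changing the default output."""
    if bom:
        yield UTF8_BOM
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
        yield buf.getvalue()
        buf.seek(0)
        buf.truncate(0)
```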
There's no reason for a hard limit on this call. If the client keeps reading the data, this code will continue looping to the next 10k records until the end. There's no benefit in making the client send multiple requests.
Fair enough, we'll assume there are no memory leaks there :)
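For reference, a sketch of the paging approach being described, built on CKAN's `datastore_search` action (the helper name and exact structure are assumptions, not the PR's code): fetch a fixed-size page of records per call and yield them one by one, so peak memory stays bounded no matter how large the table is.

```python
import ckan.plugins as p

PAGE_SIZE = 10000  # records fetched per datastore_search call

def iter_records(context, resource_id):
    search = p.toolkit.get_action('datastore_search')
    offset = 0
    while True:
        result = search(context, {
            'resource_id': resource_id,
            'limit': PAGE_SIZE,
            'offset': offset,
        })
        records = result['records']
        for record in records:
            yield record
        if len(records) < PAGE_SIZE:
            return  # a short page means we've reached the end
        offset += PAGE_SIZE
```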
@wardi regarding incompatibility with the existing API, we could support the old limit and offset parameters on top of the streaming dump.
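If those parameters do come back, one simple way to honour a client-supplied limit on top of the paged iterator sketched above (illustrative only; the real parameter handling may differ):

```python
from itertools import islice

def iter_limited(records, limit=None):
    """Stop after `limit` records when the client asks for fewer rows;
    pass the iterator through unchanged when no limit is given."""
    return islice(records, limit) if limit is not None else records
```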
```python
base.abort(404, p.toolkit._('DataStore resource not found'))

if not wr:
    pylons.response.headers['Content-Type'] = 'text/csv'
```
Should this be `Content-Type: text/csv; charset=utf-8`?
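If that suggestion is taken, the assignment would presumably become:

```python
pylons.response.headers['Content-Type'] = 'text/csv; charset=utf-8'
```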
This is a small improvement to the datastore that allows large CSV dumps and shrinks peak memory use by paging over the data and streaming the output CSV.
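To illustrate what streaming means from the client side (the host and resource id below are placeholders; the `/datastore/dump/` route is CKAN's existing dump endpoint), a client can write the response to disk as it arrives, so neither the server nor the client ever holds the whole CSV in memory:

```python
import requests

url = 'https://demo.ckan.org/datastore/dump/<resource-id>'
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    with open('dump.csv', 'wb') as f:
        # Consume the response in small chunks rather than buffering it all.
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
```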