I'm trying to export a large amount of data (~90,000 rows × 17 columns) to an Excel file. However, after the script runs, no file is created at the location I specified.
I have tried
$cacheSettings = array( 'memoryCacheSize' => '64MB');
and
ini_set('memory_limit', '64M');
but neither of them helps.
However, I did manage to get the file created by reducing the number of columns.
I understand there are existing topics related to this question, and I have gone through them, but I still couldn't find a solution to my problem.
Thank you.
Reading through issues filed against packages that use PHPExcel, it seems the best option is to load the data in smaller chunks and append each chunk to the same sheet.
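The same chunk-and-append pattern, sketched here in Python with openpyxl's write-only mode rather than PHPExcel, just to show the shape of the approach (fetch_chunk is a hypothetical paged data source):

from openpyxl import Workbook

# Write-only mode streams rows to disk instead of holding the
# whole sheet in memory at once.
wb = Workbook(write_only=True)
ws = wb.create_sheet()

def fetch_chunk(offset, size):
    # Hypothetical data source; replace with a real paged query.
    return [[offset + i] * 17 for i in range(size)]

offset, chunk_size, total = 0, 5000, 90000
while offset < total:
    for row in fetch_chunk(offset, min(chunk_size, total - offset)):
        ws.append(row)
    offset += chunk_size

wb.save('export.xlsx')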
Further reading on one of those issues:
Update May 2017:
If you are using Laravel Excel, their 3.0 version is out with huge performance improvements. If a PHP 7.0 requirement is not a problem, it might be worth a look.
I am a self-taught beginner in Python and Flutter (I have been coding for 4 months). There is a lot of information out there, and I am facing a problem I don't know how to approach.
I am building a dictionary app in Flutter. I extracted all the words and definitions from the French Wiktionary with Scrapy and processed all the data with pyspark. All the data was inserted into an ObjectBox database with Python, which has a final size of 460 MB for 355,000 entries. I compressed it with Brotli, and its final size is 65 MB. So I end up with a compressed .mdb file. And now I'm stuck.
I thought of extracting the database and reading it live with ObjectBox in Flutter, but ObjectBox does not read .mdb files directly, and it gets complicated; I can't find any documentation on the subject. Moreover, if the live extraction does not go to internal storage, won't it take up RAM (if I understood correctly)? Won't that lead to a crash?
Or is it possible to extract it when installing the app so that ObjectBox can read it directly?
Or maybe I'm overthinking this and should just read a JSON file directly, but I'm afraid the queries would be slow, because the word search is live: as the user types a letter, my program must return the words beginning with it.
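(For what it's worth, a prefix search over a sorted word list stays fast even at 355,000 entries; here is a rough Python sketch of the idea, with a toy word list standing in for the real data:)

import bisect

# Toy stand-in for the real 355,000-entry headword list.
words = sorted(['pain', 'paysage', 'poire', 'pomme', 'pont'])

def starting_with(prefix):
    # Two binary searches bound the slice of words sharing the prefix.
    lo = bisect.bisect_left(words, prefix)
    hi = bisect.bisect_right(words, prefix + '\uffff')
    return words[lo:hi]

print(starting_with('po'))  # ['poire', 'pomme', 'pont']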
What would you do to maximize performance in the background? Thanks for your help.
Welcome to Stack Overflow! :)
I think your question is a duplicate of this one:
How setup dart objectbox with a local database pre-populated?
assuming that by ObjectBox you mean this NoSQL database in both Python and Flutter, and that you're trying to use this package in the former and this one in the latter.
Is that the case? Does the answer on the other question help you? If not, can you please elaborate on what's missing/failing?
In general, adding a few links/references and narrowing down the number of topics you ask about might help (you can ask follow-up questions in comments on answers later, or just post a new question if you still need information). Not that I'm a pro here; it could just make it easier for others to answer, IMHO.
I am the lead developer on a project for a 'difficult' client. I will try not to bore anybody with the details, but here is the issue I am facing.
Our client has a team of QA testers who are managing their project through JIRA. We currently have a fixed-bid contract with them to supply the software they requested at a fixed price; any additional features or pre-existing issues will be covered under time and materials.
They have taken the time to raise every defect in the system unrelated to the current fixed-bid work and have tried to get them resolved for free. Each time, we have come to an agreement through JIRA comments that the item is a pre-existing issue or new feature that they will have to pay for after the project has been completed, and they have agreed.
The issue is that this client has a history of forgetting conversations and email threads that don't benefit them, which wastes a lot of time on our side digging up proof that we agreed to handle a situation a specific way.
The project will not be complete for several more weeks, but as soon as it is, I will likely be removed from the JIRA project by their administrator. They will then start asking again for us to complete all this additional work at no cost, and I will have lost access to the comments on each issue where I explained that it would not be free and they agreed.
I am currently exporting each ticket after it closes, but this wastes about 30-40 minutes a day. I would be interested to know if there is a tool that can export an entire JIRA project to a readable text format, which I could run once near the project's end.
TL;DR: Is there a tool that will allow me to export an entire JIRA project in a text-readable format before I lose access to the project and all the information included within it?
Export as CSV doesn't include comments and is limited to 1000 issues by default.
I have used the jira-python library to retrieve all issues, all fields, and all comments from a single project. It missed the attachments, though.
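For reference, a minimal sketch of that approach with jira-python (the server URL, credentials, and project key are placeholders):

from jira import JIRA

# Placeholders: point these at the real instance and project.
jira = JIRA(server='https://jira.example.com', basic_auth=('user', 'api-token'))

start = 0
while True:
    # fields='*all' pulls every field; results come back in pages.
    batch = jira.search_issues('project = PROJ ORDER BY key',
                               startAt=start, maxResults=100, fields='*all')
    if not batch:
        break
    for issue in batch:
        print(issue.key, issue.fields.summary)
        for comment in jira.comments(issue):
            print('   ', comment.author.displayName, comment.body)
    start += len(batch)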
But what you have is a people problem more than a technical problem. Good luck!
Large exports (e.g. many hundreds of issues) are not recommended.
To change the number of issues that are exported, change the value of the tempMax parameter in the URL.
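For example, the issue-view export URL might look like this (the host, project key, and count are placeholders):

https://jira.example.com/sr/jira.issueviews:searchrequest-excel-all-fields/temp/SearchRequest.xls?jqlQuery=project%3DPROJ&tempMax=5000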
To export search results to Microsoft Excel:
Choose Issues > Search for Issues.
Refine your search, as described in Searching for Issues, then choose the Export menu.
Choose one of the following from the dropdown menu:
'Excel (All fields)' — this will create a spreadsheet column for every issue field (excluding comments).
Note: This will only show the custom fields that are available for all of the issues in the search results. For example, if a field is only available in one project and multiple projects are in the search results, then that field will not appear in the Excel document. The same goes for fields that are only available for certain issue types.
'Excel (Current fields)' — this will create a spreadsheet column for the issue fields that are currently displayed in your Issue Navigator.
A file called - .xls will be created. Edit this file using Microsoft Excel and/or save it as required.
Is there a way to read Excel 2010/2013 files natively?
We are importing Excel files into SQL Server and have come across a specific issue whereby the Excel driver appears to decide the type of a destination data column by testing the contents of only the first 65K-odd rows.
This has only started happening within the past 3 weeks; before then, we had managed to convince Excel of the error of its ways with a simple registry hack that forced it to read the entire set of rows.
The problem is that we have some datasets containing, say, 120,000 rows that may have all-numeric values for the first 80,000 rows, followed by non-numeric yet vital information that we wish to retain.
Yes, the data is not correctly typed, we know.
Because the Excel driver has determined the source data type to be a float, it promptly turns all our non-numeric values into NULLs, which is not very useful.
If there were some other way to read an Excel file without using the standard ODBC/OLEDB drivers, that might help.
We have tried saving it into various other formats before importing, but of course all these exports use the Excel driver, which has the same problem.
I think the closest we have got is to save it as XML (which is frankly huge at 800 MB) and then shred it using standard XPath queries and some pretty dodgy workarounds to handle the no-doubt well-formed but still tricky variations in how column data is represented.
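(For what it's worth, the shredding can be scripted; here is a rough Python/lxml sketch against the SpreadsheetML (Excel 2003 XML) layout, where the omitted-empty-cell/ss:Index behaviour is exactly the tricky variation mentioned above:)

from lxml import etree

SS = 'urn:schemas-microsoft-com:office:spreadsheet'

def rows(path):
    # Stream rows so the 800 MB file never sits in memory at once.
    for _, row in etree.iterparse(path, tag='{%s}Row' % SS):
        cells, col = {}, 0
        for cell in row.iterfind('ss:Cell', {'ss': SS}):
            # Empty cells are omitted; ss:Index jumps the column number.
            col = int(cell.get('{%s}Index' % SS, col + 1))
            data = cell.find('ss:Data', {'ss': SS})
            cells[col] = data.text if data is not None else None
        yield cells
        row.clear()

for r in rows('export.xml'):
    print(r)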
Edit: changed title to more closely reflect the issue
As well as the registry key, when connecting to your Excel file, have you tried setting the following:
;Extended Properties="IMEX=1"
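For context, a complete OLE DB connection string with that property might look like this (the ACE provider version and file path here are assumptions):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Import.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1"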
See here
Also see this MSDN article
I have implemented the file upload functionality with reference to this link:
http://www.slideshare.net/mongodb/mongo-db-bangalore-2012-15070802
But the file is not stored in GridFS.
I have done some research on this, also with reference to this blog post:
http://php-and-symfony.matthiasnoback.nl/2012/10/uploading-files-to-mongodb-gridfs-2/
But again, unfortunately, I have been stuck on this issue for the last 15 days.
Please help.
Please take a look at KnpLabs/Gaufrette and the related KnpLabs/KnpGaufretteBundle
The Gaufrette bundle provides a level of abstraction around filesystems, and it helped me get file-oriented operations up and running quickly. I found it very useful; in fact, the Symfony CMS package leverages this bundle. It may help you out as well.
I am helping out an organization that is planning to change its membership system. Right now their system is developed in Plone, and all their data is in a Data.fs file.
Their system is down for the moment and it would take some time and effort to get it up and running.
Is there a way to get the data out of the database into a standard format such as CSV files or SQL? Or do they need to get the system up and running first and export the files from "within" Plone?
Thanks for your help and ideas!
Kind regards,
Samuel
The Data.fs file is an object-oriented database file, written by a framework called the ZODB. The data within it represents Python instances, laid out in a tree structure.
You could open this database from a Python script, but in order to make sense of the contained structures, you'll need access to the original class definitions that make up the stored instances. Without those class definitions, all you'll get is placeholder objects ("Broken" objects) that are of no use at all.
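For illustration, opening the file from Python looks roughly like this (a minimal sketch; it assumes the ZODB package is installed, and it will only yield usable objects if the original class definitions are importable):

from ZODB import FileStorage, DB

# Open the Data.fs directly. Without the original Plone class
# definitions on the import path, stored instances come back
# as "Broken" placeholder objects.
storage = FileStorage.FileStorage('Data.fs')
db = DB(storage)
connection = db.open()
root = connection.root()

print(list(root.keys()))  # top of the object tree

connection.close()
db.close()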
As such, it's probably easier to just get the Plone instance back up and running, since it'll be easier to export exactly the data you want when you have things like the catalog (basically a specialized database index) to build your export.
It could be that this site is down because of something trivial, something we can help you with here on Stack Overflow, on the Plone users mailing lists, or in the #plone IRC channel. If you do get it up and running and have some details on what you are trying to export, we can certainly help.
You'll need to get the system up and running to export data. Data in the Data.fs file is stored as Python pickles and is not intelligible to "outside" systems.
As others have pointed out, your best course would be to get Plone running again. After doing so, try csvreplicata to export the existing data to CSV format. For user accounts, try atreal.usersinout.
If you need professional help, you can search for available providers at http://plone.org/support/providers
For free support, post specific problems here.
Recently I managed to export a Plone 4 site to SQLite using SQLExporter: http://plone.org/products/proteon.sqlexporter. But you need to get your Plone instance working first to use it.