PHP: transfer big file using ftp_get()

I'm trying to transfer a large file (~2.5 GB) using ftp_get().
After starting the transfer I see the file appearing on the FTP server and counting up.
At the end the file disappears. Any ideas?
Smaller files (up to 100 MB) transfer without problems.
The code is really simple ;)
return @ftp_get($obj_ftp, $str_target, $str_source, FTP_BINARY);
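One common culprit at this size is a 32-bit PHP build, where 2.5 GB exceeds PHP_INT_MAX and the FTP functions fail on files over 2 GB; another is the connection timing out during one long blocking call. As a hedged sketch (assuming the same $obj_ftp, $str_target and $str_source variables and a 64-bit PHP build), raising the timeout and switching to a non-blocking transfer with resume support may help:

// Raise the control-connection timeout from the 90-second default.
ftp_set_option($obj_ftp, FTP_TIMEOUT_SEC, 600);

// Non-blocking download; FTP_AUTORESUME continues a broken transfer
// from where it stopped instead of starting over.
$ret = ftp_nb_get($obj_ftp, $str_target, $str_source, FTP_BINARY, FTP_AUTORESUME);
while ($ret === FTP_MOREDATA) {
    $ret = ftp_nb_continue($obj_ftp); // pull the next chunk
}
return $ret === FTP_FINISHED;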

Related

Nsight Systems: how to cut a .qdrep file into smaller .qdrep files?

I’ve profiled my app for 150s, and I got a large .qdrep file, which is too large for my computer to open. Is it possible to cut the 150s file and get 0s-90s, 90s-150s parts respectively?

Multithreaded compression, random access and on-the-fly reading

I have a program running on Linux which generates thousands of text files. I want these files packed into a single (compressed) file.
The compressed file will later be opened by a C program, which needs to access specific files inside that container, in a random fashion.
The whole thing is working as follows:
Linux program generates thousands of small files
zip -9 out.zip *
C program with libzip accessing specific files inside the .zip, depending on what the user requests. These reads are done in memory (no writing decompressed files to disk).
Works great. However, it takes about 20 minutes for the compression to finish. Because the compression runs on a 40-core server, I have been experimenting with lbzip2, with excellent results in terms of both compression ratio and speed. I have also used zip -0 to pack all the .bz files into a single .zip container, which I assume is a better option than tar because of random access.
So my question is: how can I read .bz files compressed inside a .zip file? As far as I can tell, gzopen takes a file path as its first argument.
You could just stick with your current zip format for random access. Run a separate zip command on each text file to turn them into many single-entry zip files. Launch all of those at once, and your 40 cores will be kept busy until done. Once done, use zipmerge to combine them all into a single zip file.
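For the read side of the question (pulling a still-compressed .bz entry out of the .zip and decompressing it in memory), a minimal sketch along these lines may help. It assumes the entries were stored with zip -0, so the size libzip reports for an entry is the raw bzip2 byte count; read_bz2_entry and MAX_DECOMPRESSED are illustrative names, error handling is trimmed, and you would size the output buffer yourself (or store the uncompressed size alongside each entry):

/* Read one bzip2-compressed entry out of a zip archive into memory
   with libzip, then decompress it with libbzip2. */
#include <stdio.h>
#include <stdlib.h>
#include <zip.h>
#include <bzlib.h>

#define MAX_DECOMPRESSED (64 * 1024 * 1024) /* assumption: 64 MB cap */

char *read_bz2_entry(const char *zip_path, const char *entry, unsigned int *out_len)
{
    int err = 0;
    zip_t *za = zip_open(zip_path, ZIP_RDONLY, &err);
    if (za == NULL)
        return NULL;

    zip_stat_t st;
    zip_stat(za, entry, 0, &st);

    /* Pull the stored (still bzip2-compressed) bytes into memory. */
    char *packed = malloc(st.size);
    zip_file_t *zf = zip_fopen(za, entry, 0);
    zip_fread(zf, packed, st.size);
    zip_fclose(zf);
    zip_close(za);

    /* Decompress in memory; no temporary files touch the disk. */
    char *plain = malloc(MAX_DECOMPRESSED);
    *out_len = MAX_DECOMPRESSED;
    if (BZ2_bzBuffToBuffDecompress(plain, out_len, packed,
                                   (unsigned int)st.size, 0, 0) != BZ_OK) {
        free(plain);
        plain = NULL;
    }
    free(packed);
    return plain;
}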

Python reading of files

I am new to Python and I am facing my first troubles.
I have to read some .dat files (100 of them), and each file contains a set of 5000 power traces. The total amount of memory taken by the files is almost 10 GB, so I cannot read the files all together because I would fill the RAM. So np.fromfile in a for loop over all the files is not usable.
I would like to use memory mapping, reading just a few files at a time, but I need to handle the data at the same time.
Do you have some suggestions?
Cheers
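As a hedged sketch of the memory-mapping idea: np.memmap only pages in the parts of a file you actually touch, so you can walk the 100 files one at a time and accumulate whatever statistic you need without ever holding all 10 GB in RAM. The dtype, the trace length and the per-trace mean computed here are all assumptions; substitute your real layout and processing:

# Process the .dat files one at a time via memory mapping.
import glob
import numpy as np

trace_len = 1000    # assumed samples per trace
dtype = np.float32  # assumed sample type

running_sum = np.zeros(trace_len, dtype=np.float64)
n_traces = 0

for path in sorted(glob.glob("*.dat")):
    # Map the file read-only; rows are traces under the assumed layout.
    traces = np.memmap(path, dtype=dtype, mode="r").reshape(-1, trace_len)
    running_sum += traces.sum(axis=0)
    n_traces += traces.shape[0]
    del traces      # drop the mapping before moving to the next file

mean_trace = running_sum / n_traces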

How to send a txt file from server to client using HTTP in Java

I have a problem in my Grails application, which reads a txt file stored on disk and then sends the file to the client.
Right now I am achieving this by reading the file line by line and storing the lines in a String array.
After reading all lines from the file, the String array is sent to the client as JSON.
In my GSP's JavaScript I get that array and display its contents in a textarea:
textarea.value = arr.join("\n\n");
This operation is repeated every minute using Ajax.
My problem is, the txt file the server is reading contains about 10,000 to 20,000 lines.
So reading all those 10,000+ lines and sending them as an array causes problems in IE8, which hangs and finally crashes.
Is there any other easy way of sending the whole file over HTTP and displaying it in the browser?
Any help would be greatly appreciated.
Thanks in advance.
EDIT:
While googling I found that streaming the file with input/output streams is a better way to display the file contents in a browser, but I couldn't find an example of how to do it.
Can anyone share an example?
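A minimal servlet-style sketch of the streaming approach follows (in a Grails controller the same idea is writing the file's bytes straight to response.outputStream); the file path and class names are illustrative assumptions:

// Stream the file to the client in chunks instead of building one
// huge JSON array in memory.
import java.io.*;
import javax.servlet.http.*;

public class LogFileServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        File file = new File("/var/data/output.txt"); // assumed path
        resp.setContentType("text/plain");
        resp.setContentLength((int) file.length());

        try (InputStream in = new FileInputStream(file);
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // send each chunk as it is read
            }
        }
    }
}

On the browser side, pointing the periodic Ajax call at this URL and assigning the response text to the textarea replaces the 10,000+-element array join with a single plain-text response, which is much gentler on IE8.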

Unknown Master List .dat file, issues retrieving information

I come to you completely stumped. I do some side work for a company that uses an old DOS-based program to input and retrieve data. This is a legacy piece of software, and they have since moved to either QuickBooks or Outlook for all of their address and billing needs. However, some changes have been made, and they still work with this database fairly regularly. Since the computer this software is on runs XP (and none of the other computers in the office can run it), they're looking to phase the software out before the machine inevitably explodes.
TL;DR: I have an old .csv file (roughly two years old) that has a good chunk of information in it, but again, it's two years old. I have another file called ml.dat (I'm assuming "master list.dat") that's in the same folder as this legacy software. I open it with Notepad and Excel and am presented with information like this:
S;Û).;PÃS;*p(â'a,µ,
The chunk of text above is recognized even less within Notepad or Excel; there it's a lot more unrecognized squares.
Some of the information is actually readable, however. I can, for example, read the occasional town name or person's name, but I'm unable to get all of the information since a lot is missing. Perhaps the data isn't in Unicode or something? I have no idea. Any suggestions? I'm ultimately trying to take this information and put it into either QuickBooks or Outlook.
Please help!
Thanks
Edit: I'm guessing the file might be encrypted, since .dat files are usually clear text? Any thoughts?
.DAT files can be anything; they are usually just application data. Since there is readable text, it is very unlikely that this file is encrypted. Instead you are seeing ASCII representations of the raw bytes of other content (see http://www.asciitable.com/). Assuming single-byte values, the number 77 might appear in the file somewhere as the letter M.
Your options:
1. Search for a utility to load and translate the .dat file for that application.
2. Set up an appropriate DOS emulator so you can run the application on another box, or even a virtual machine running FreeDOS or something.
3. Figure out the file format and then write a program to translate the data.
For #3, you can attach a debugger to the application to trace how the file is read and written. Alternatively, you can try to figure out record boundaries (if all the records are the same size, things are a little easier); then you can use known values to try to find field boundaries, as the dump sketch below illustrates. If you can find (or decompile) the source code, that could also give you insight into the file format.
#1 is your best bet, and #2 will buy you some time so that you don't need the original machine anymore. #3 would likely be something to outsource.
If you can find the source or file format, then you just recreate whatever data structure was dumped to the file and read the file into it.
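For the record-boundary approach, a small dump utility can make repeating structure visible. This is a hedged sketch: RECORD_GUESS is an assumed record size that you adjust and re-run until the columns line up, printable bytes are shown as-is, and everything else becomes a dot:

#include <stdio.h>
#include <ctype.h>

#define RECORD_GUESS 64 /* assumed record size; tweak until columns align */

int main(void)
{
    FILE *f = fopen("ml.dat", "rb");
    unsigned char buf[RECORD_GUESS];
    size_t n;

    if (f == NULL)
        return 1;
    /* Print one candidate record per line as a crude text view. */
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i < n; i++)
            putchar(isprint(buf[i]) ? buf[i] : '.');
        putchar('\n');
    }
    fclose(f);
    return 0;
}

When the guess matches the real record size, town names and people's names will stack up in vertical columns, which tells you the record length and roughly where the text fields sit.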
To find which exe opens it, you can do something like:
for %f in (*.exe) do find /c "ml.dat" %f
Assuming the original application was written in C, there would be code something like this to read the first record from the file:
#include <stdio.h>

struct SecretData
{
    int first;
    double money;
    char city[10];
};

int main(void)
{
    FILE* input;
    struct SecretData secretdata;

    input = fopen("ml.dat", "rb");                    /* open for binary read */
    fread(&secretdata, sizeof(secretdata), 1, input); /* read the first record */
    fclose(input);
    return 0;
}
(The file would have been written with fwrite.) Basically you need to figure out the innards of the SecretData structure to be able to read the file.
There likely wasn't a separate utility used to make the file; dumping data to a file and reading it back is relatively easy in most languages.
