Haxe Asynchronously Read Local File?

I need to do IPC with another app. That app sends data and my Haxe app receives. For some reason my choice narrows down to the file system.
My idea is to have the other app keep updating a file, and my Haxe app keeps reading from that file. The problem is I keep getting the same value, although the data file is changing. I assume I need some asynchronous reader to get the updated file, but what should I use?
Here's my code:
var tmpWaveRaw = sys.io.File.getContent("assets/wave.txt");
trace(tmpWaveRaw); //always stays the same when the app is running, but changes when the app restarts.
Thanks!
Update:
Here's an experiment I just did:
Have my code keep printing the file's modified time every 100 ms.
Let the other application modify the file, then stop it.
Start my application. It now prints the correct modified time, consistent with my OS's file stat.
Let the other application modify the file again, then close it.
My application still prints the old modified time, which is no longer consistent with my OS's file stat.

This should work; you need to read the file again after it changes:
var running = true;
var last = sys.FileSystem.stat("test.txt");
while (running) {
    var now = sys.FileSystem.stat("test.txt");
    if (last.mtime.getTime() != now.mtime.getTime()) {
        last = now;
        trace(sys.io.File.getContent("test.txt"));
    }
    Sys.sleep(0.1); // poll every 100 ms instead of busy-waiting the CPU
}

Related

protractor: test download file without knowing filename

I followed this answer and it looks like almost exactly what I need.
The problem there is that the author already knows the filename. I am writing an e2e test for downloading a file, but the filename depends on the current time (down to the millisecond), so I don't really know the name (or it would be very difficult to get it).
I think I am missing something very simple here, but I was thinking of two ways:
Recreate the filename (with the same function that generates it) and check for the existence of a file with that name; if it doesn't exist, move to the next millisecond until I hit the right name.
Check the download folder for the existence of "any" file; if I find one there, it should be the file I am downloading (for this case I don't know how to check an entire folder in protractor).
Hope you guys could help with these alternatives (I would like some help with point 2) or maybe give me a better one. Thanks
I ended up following alecxe's suggestion and here is my answer:
var glob = require("glob");
browser.driver.wait(function () {
    var filesArray = glob.sync(filePattern);
    // This check is necessary because glob.sync can return an
    // empty array, which is truthy and would otherwise end the
    // wait prematurely.
    if (typeof filesArray !== 'undefined' && filesArray.length > 0) {
        return filesArray;
    }
}, timeout).then(function (filesArray) {
    var filename = filesArray[0];
    // now we have the filename and can do whatever we want
});
Just to add a bit more background information to elRuLL's answer.
The main idea is based on two things:
browser.wait() fits the problem perfectly - it executes a function repeatedly until it evaluates to a truthy value or a timeout is reached, and the timeout mechanism is already built in.
The glob module provides a way to look for filenames matching a certain pattern (in the worst case, you can wait for *.* - basically, for any file to appear). A self-contained sketch follows.
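For reference, here is a minimal sketch of the same idea put together; the download directory and file extension are assumptions you would replace with your own values:
var glob = require("glob");
var path = require("path");

// Assumed values - substitute the download directory your browser is
// configured to use and the extension your app gives the file.
var downloadDir = "/tmp/downloads";
var filePattern = path.join(downloadDir, "*.csv");

browser.driver.wait(function () {
    // glob.sync returns [] until a matching file shows up; returning
    // a falsy value keeps browser.wait polling.
    var files = glob.sync(filePattern);
    return files.length > 0 ? files : false;
}, 30000).then(function (files) {
    console.log("Downloaded file:", files[0]);
});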

Is it possible to save a file directly from a web worker?

I have an entirely browser-based (i.e. no backend) application which analyzes XML data in files which average about 250MB each. The actual parsing and analysis happens in a web worker, which is fed data in 64KB chunks by a FileReader instance, and this is all quite performant.
I have a request from the client to expand this application so that it can generate a .zip file containing the original input file and the results of the analysis, and allow the user to save that file to her local machine. Generating a .zip file in memory with those contents isn't a problem. The problem lies in transferring that much data from the web worker which generates it back to the main browser thread, so that it can be saved; attempting to do this invariably provokes a crash or out-of-memory exception. (I've tried transferring strings all at once and a chunk at a time, and I've tried using an ArrayBuffer as a transferable object to avoid copying. All fail in the same fashion.)
Unfortunately, I don't know any way to invoke a file save operation directly from a worker thread. I know several methods of doing so from the main browser thread, but all of them require either the ability to create DOM nodes (which worker threads of course can't do), or the use of interfaces (i.e. msSaveBlob, saveAs) which no browser seems to expose to a worker thread. I've spent a while looking for possibilities on the web, but found nothing usable; FileWriterSync looked good, but only Chrome supports it, and I need to target IE and Firefox as well.
Is there a method I've overlooked for saving files directly from a web worker? If so, what is it? Or am I just out of luck here?
tl;dr demo
You don't need to copy the entire file back to the main thread at all. You don't even need to transfer it, in fact. First, a recap.
This is how to create a Blob from a typed array:
// Some arbitrary binary data
const mydata = new Uint16Array([1,2,3,4,5]);
// mydata vs. mydata.buffer does not seem to make any difference
const blob = new Blob([mydata], {type: "application/octet-stream"});
You can create an object URL, which is a browser-managed reference to the original Blob, accessible as a URL. I have done this with huge files without seeing any performance impact:
const url = URL.createObjectURL(blob);
This is how I typically download URLs:
const link = document.createElement("a");
link.download = "data.bin";
link.href = url;
link.appendChild(new Text("Download data"));
link.addEventListener("click", function() {
    this.parentNode.removeChild(this);
    // remember to free the object URL, but wait until the download is handled
    setTimeout(() => { URL.revokeObjectURL(url); }, 500);
});
document.body.appendChild(link);
You can trigger the download automatically by invoking a click event on that link. I prefer to let the user decide when to download.
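If you do want to start the download immediately, a one-line sketch (assuming the link element from above has already been appended):
// Programmatically trigger the download instead of waiting for the user
link.click();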
So, all together:
worker.js
// Some arbitrary binary data
const mydata = new Uint16Array([1,2,3,4,5]);
self.onmessage = function(e) {
    console.log("Message: ", e.data);
    switch(e.data.name) {
        case "make-download": {
            // URL.createObjectURL is available in dedicated workers
            const blob = new Blob([mydata.buffer], {type: "application/octet-stream"});
            const url = URL.createObjectURL(blob);
            self.postMessage({name: "download-link", link: url});
            break;
        }
        default:
            console.error("Unknown message:", e.data.name);
    }
};
main.js
var worker = new Worker("worker.js");
worker.addEventListener("message", function(e) {
    switch(e.data.name) {
        case "download-link": {
            if (e.data.error) {
                console.error("Download error: ", e.data.error);
            }
            else {
                const link = document.createElement("a");
                link.download = "data.bin";
                link.href = e.data.link;
                link.appendChild(new Text("Download data"));
                link.addEventListener("click", function() {
                    this.parentNode.removeChild(this);
                    // remember to free the object URL, but wait until the download is handled
                    setTimeout(() => { URL.revokeObjectURL(e.data.link); }, 500);
                });
                document.body.appendChild(link);
            }
            break;
        }
        default:
            console.error("Unknown message:", e.data.name);
    }
});
function requestDownload() {
    worker.postMessage({name: "make-download"});
}
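To wire this up in the page, you might call requestDownload from a button (the element id here is an assumption):
// Assumed: an element with id="download-button" exists in the page
document.getElementById("download-button")
        .addEventListener("click", requestDownload);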
When I click Download in my demo, the bytes shown in my HEX editor match the original data. Looks just fine :)

How to Properly Call API and Cache the Data (Node/Angular)?

I'm currently working on a project that requires me to make an API call. It only allows 500 requests per 10 minutes, but the data returned (an object with ~800 properties) only changes every few months, so I'd rather just cache it somewhere.
I'm very new to this whole thing and I'm wondering how I can make the call every few months and store the data somewhere so that I can retrieve it from the client whenever needed?
Thanks in advance!
Since you want to store your object for a longer period of time, I would suggest writing it to disk rather than caching it in memory (in case your Node app crashes).
You didn't mention it precisely, but I assume you are referring to a simple JavaScript object that you want to store? To store such an object to disk, you can do the following:
var fs = require("fs");
// with your object being stored in the variable "myObject", after your API call:
var myObject = ...
fs.writeFile("myFilename.json", JSON.stringify(myObject), "utf8", function(err) {
    if (err) {
        return console.log(err);
    }
    // do whatever you want to do after the file has been saved...
});
To read the object back from disk, avoid require("./myFilename.json"): Node caches required modules, so you would keep getting the stale first read. Parse the file instead:
myObject = JSON.parse(fs.readFileSync("myFilename.json", "utf8"));
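Since the data only changes every few months, one way to keep the file fresh is a simple timer in the Node process. A minimal sketch, where fetchFromApi is a hypothetical stand-in for your actual API call:
var fs = require("fs");

// Hypothetical helper - replace with your real API request logic.
function fetchFromApi(callback) {
    // callback(err, myObject)
}

var WEEK_MS = 1000 * 60 * 60 * 24 * 7; // weekly is well under the rate limit

function refreshCache() {
    fetchFromApi(function (err, myObject) {
        if (err) return console.log(err);
        fs.writeFile("myFilename.json", JSON.stringify(myObject), "utf8", function (err) {
            if (err) console.log(err);
        });
    });
}

refreshCache();                     // refresh once at startup
setInterval(refreshCache, WEEK_MS); // then refresh on a timer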

How to delete a File in Google Drive?

How do I write a Google Apps Script that deletes files?
This finds files:
var ExistingFiles = DocsList.find(fileName);
But DocsList.deleteFile does not exist to delete a file.
Is there a way to move those files to another folder or to the trash?
The other workaround I would consider is being able to overwrite an existing file with the same name.
Currently, when I create a file with a name already used in My Drive, a second file with the same name is created. I would like to keep one file (the new one is kept and the old one is lost).
There are 3 services available to delete a file.
DriveApp - Built-in to Apps Script
Advanced Drive Service - Built-in to Apps Script but must be enabled. Has more capability than DriveApp
Google Drive API - Not built-in to Apps Script, but can be used from Apps Script using the Drive REST API together with UrlFetchApp.fetch(url, options); a sketch of this approach follows the list.
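For the third option, here is a sketch of what the REST call could look like from Apps Script; it assumes you already have the file ID and that the script is authorized for the Drive scope:
// Deletes a file permanently via the Drive REST API (v3).
function deleteViaRestApi(fileId) {
    var url = "https://www.googleapis.com/drive/v3/files/" + fileId;
    var options = {
        method: "delete",
        headers: { Authorization: "Bearer " + ScriptApp.getOAuthToken() },
        muteHttpExceptions: true
    };
    var response = UrlFetchApp.fetch(url, options);
    Logger.log(response.getResponseCode()); // 204 on success
}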
The DocsList service is now deprecated.
The Advanced Drive Service has a remove method which deletes a file without sending it to the trash folder; seriously consider the risk of not being able to retrieve the deleted file. Advanced services have many of the same capabilities as the APIs, without needing to make an HTTPS GET or POST request and without needing an OAuth library.
function deleteFile(myFileName) {
    var allFiles, idToDelete, myFolder, returnedFromDelete, thisFile;
    myFolder = DriveApp.getFolderById('Put_The_Folder_ID_Here');
    allFiles = myFolder.getFilesByName(myFileName);
    while (allFiles.hasNext()) { // while there is another element in the iterator
        thisFile = allFiles.next();
        idToDelete = thisFile.getId();
        //Logger.log('idToDelete: ' + idToDelete);
        returnedFromDelete = Drive.Files.remove(idToDelete);
    }
}
This combines the DriveApp service and the Drive API to delete the file without sending it to the trash. The Drive API method remove(id) needs the file ID. If the file ID is not available but the file name is, the file can first be looked up by name to get its ID.
In order to use the Drive API, you need to add it through the Resources > Advanced Google Services menu and set the Drive API to ON. Also make sure that the Drive API is turned on in your Google Cloud Platform project. If it's not turned on in BOTH places, it won't be available.
Now you may use the following if you have the file as a spreadsheet, doc, etc.:
DriveApp.getFileById(spreadsheet.getId()).setTrashed(true);
or, if you already have the file itself rather than a spreadsheet, doc, etc., you may use:
file.setTrashed(true);
This code uses the DocsList Class which is now deprecated.
try this:
function test(){
    deleteDocByName('Name-of-the-file-to-delete');
}
function deleteDocByName(fileName){
    var docs = DocsList.find(fileName);
    for (var n = 0; n < docs.length; ++n) {
        if (docs[n].getName() == fileName) {
            var ID = docs[n].getId();
            DocsList.getFileById(ID).setTrashed(true);
        }
    }
}
Since you can have many docs with the same name, I used a for loop to go through all the docs in the array of documents and delete them one by one if necessary.
I used a function with the filename as a parameter to simplify its use in a script; use the test function to try it.
Note: be aware that all files with this name will be trashed (and recoverable ;-)
About the last part of your question, keeping the most recent file and deleting the old one: it would be doable (by reading the last accessed date & time), but I think it is a better idea to delete the old file before creating a new one with the same name... far more logical and safe!
Though the DocsList service is now deprecated, the setTrashed method, as seen in the Class Folder reference, is still valid:
https://developers.google.com/apps-script/reference/drive/folder#settrashedtrashed
Note that DocsList.find(fileName) returns an array of files, so call setTrashed on each element rather than on the array itself, as sketched below.
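A minimal sketch:
var existingFiles = DocsList.find(fileName);
for (var i = 0; i < existingFiles.length; i++) {
    existingFiles[i].setTrashed(true);
}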
Here is another way to do it without the need for the Drive API (based on Allan's response).
function deleteFile(fileName, folderName) {
    var myFolder, allFiles, file;
    myFolder = DriveApp.getFoldersByName(folderName).next();
    allFiles = myFolder.getFilesByName(fileName);
    while (allFiles.hasNext()) {
        file = allFiles.next();
        file.getParents().next().removeFile(file);
    }
}
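Usage would look like this (hypothetical file and folder names):
deleteFile("report.pdf", "Backups");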
Here is a slightly modified version using the above. This will back up the file to a specified folder and also remove any previous backups with the same name, so there are no duplicates.
The idea here is to back up once per day, retaining one month of backups in your backup folder of choice. Remember to set your trigger to daily in Apps Script.
https://gist.github.com/fmarais/a962a8b54ce3f53f0ed57100112b453c
function archiveCopy() {
    var file = DriveApp.getFileById("original_file_id_to_backup");
    var destination = DriveApp.getFolderById("backup_folder_id");
    var timeZone = Session.getScriptTimeZone();
    var formattedDate = Utilities.formatDate(new Date(), timeZone, "dd"); // 1 month of backups, one per day
    var name = SpreadsheetApp.getActiveSpreadsheet().getName() + "_" + formattedDate;
    // remove old backup
    var allFiles = destination.getFilesByName(name);
    while (allFiles.hasNext()) {
        var thisFile = allFiles.next();
        thisFile.setTrashed(true);
    }
    // make new backup
    file.makeCopy(name, destination);
}
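If you prefer to create the daily trigger from code instead of the editor UI, a sketch (run it once manually):
// Creates a time-based trigger that runs archiveCopy once a day
function createDailyTrigger() {
    ScriptApp.newTrigger("archiveCopy")
        .timeBased()
        .everyDays(1)
        .create();
}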

Write multiple entities using an RPC proxy class in Google App Engine from Android

I am using the App Engine Connected Android Plugin support and customized the sample project shown at Google I/O. I ran it successfully and wrote some Tasks from an Android device to the cloud successfully using this code:
CloudTasksRequestFactory factory = (CloudTasksRequestFactory) Util
        .getRequestFactory(CloudTasksActivity.this,
                CloudTasksRequestFactory.class);
TaskRequest request = factory.taskRequest();
TaskProxy task = request.create(TaskProxy.class);
task.setName(taskName);
task.setNote(taskDetails);
task.setDueDate(dueDate);
request.updateTask(task).fire();
This works well and I have tested it.
What I am trying to do now is: I have an array String[][] addArrayServer and want to write all of its elements to the server. The approach I am using is:
NoteSyncDemoRequestFactory factory = Util.getRequestFactory(activity, NoteSyncDemoRequestFactory.class);
NoteSyncDemoRequest request = factory.taskRequest();
TaskProxy task;
for (int ik = 0; ik < addArrayServer.length; ik++) {
    task = request.create(TaskProxy.class);
    Log.d(TAG, "inside uploading task:" + ik + ":" + addArrayServer[ik][1]);
    task.setTitle(addArrayServer[ik][1]);
    task.setNote(addArrayServer[ik][2]);
    task.setCreatedDate(addArrayServer[ik][3]);
    // made one task
    request.updateTask(task).fire();
}
One task is uploaded for sure, the first element of the array, but it hangs when making a new task instance. I am pretty new to Google App Engine. What's the right way to call the RPC to upload multiple entities quickly?
Thanks.
Well, the answer to this question is that request.fire() can be called only once per request object, but I was calling it every time in the loop, so it would update only once. The simple solution is to call fire() outside the loop:
NoteSyncDemoRequestFactory factory = Util.getRequestFactory(activity, NoteSyncDemoRequestFactory.class);
NoteSyncDemoRequest request = factory.taskRequest();
TaskProxy task;
for (int ik = 0; ik < addArrayServer.length; ik++) {
    task = request.create(TaskProxy.class);
    Log.d(TAG, "inside uploading task:" + ik + ":" + addArrayServer[ik][1]);
    task.setTitle(addArrayServer[ik][1]);
    task.setNote(addArrayServer[ik][2]);
    task.setCreatedDate(addArrayServer[ik][3]);
    // made one task
    request.updateTask(task);
}
request.fire(); // call fire only once, after all the actions are queued
For more info, check out this question: GWT RequestFactory and multiple requests.
