I am new to the VS Code extension API. I have a feature where I need to take input from the user when they click on a certain line, and then 1) get the input value, line number and file name, and 2) store it in a text file.
I am done with the first part and am getting all the data. Now I just have to write it to the file, and if there is already data in it, the new data should be appended, not overwritten.
I have tried using fs.writeFileSync(filePath, data) and readFileSync, but with no luck, and I do not know if I am doing it correctly. Could someone point me in the right direction? I am just blank at this stage.
Any help would be appreciated. Thanks in advance.
The FS node module works fine in an extension. I use it all the time for file work in my extension. Here's a helper function to export something to a file with error handling:
/**
 * Asks the user for a file to store the given data in. Checks if the file already exists and asks for permission to
 * overwrite it, if so. Also copies a number of extra files to the target folder.
 *
 * @param fileName A default file name the user can change, if wanted.
 * @param filters The file type filter as used in showSaveDialog.
 * @param data The data to write.
 * @param extraFiles Files to copy to the target folder (e.g. css).
 */
public static exportDataWithConfirmation(fileName: string, filters: { [name: string]: string[] }, data: string,
extraFiles: string[]): void {
void window.showSaveDialog({
defaultUri: Uri.file(fileName),
filters,
}).then((uri: Uri | undefined) => {
if (uri) {
const value = uri.fsPath;
fs.writeFile(value, data, (error) => {
if (error) {
void window.showErrorMessage("Could not write to file: " + value + ": " + error.message);
} else {
this.copyFilesIfNewer(extraFiles, path.dirname(value));
void window.showInformationMessage("Diagram successfully written to file '" + value + "'.");
}
});
}
});
}
And here is an example where I read a file without user intervention:
this.configurationDone.wait(1000).then(() => {
...
try {
const testInput = fs.readFileSync(args.input, { encoding: "utf8" });
...
} catch (e) {
...
}
...
});
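For the appending part of the original question: fs.appendFile creates the file if it doesn't exist and otherwise adds to the end, so there is no need to read the old content back in first. Here is a minimal sketch; the record format, the annotations.txt name and the helper itself are my own assumptions, not anything the extension API prescribes:
import * as fs from "fs";
import * as path from "path";
import { window } from "vscode";

// Appends one record per user input; the file is created on first use.
// The line format and file name are illustrative only.
export function appendAnnotation(storageDir: string, fileName: string, line: number, value: string): void {
    const record = `${fileName}:${line}\t${value}\n`;
    const target = path.join(storageDir, "annotations.txt");
    fs.appendFile(target, record, (error) => {
        if (error) {
            void window.showErrorMessage("Could not append to file: " + target + ": " + error.message);
        }
    });
}
A reasonable place for storageDir is the extension's own storage folder (e.g. context.globalStorageUri.fsPath), creating that directory first if it doesn't exist yet.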
I'm developing a SwiftUI document-based app that contains some easily serializable data plus multiple images. I'd like to save the document as a package (i.e., a folder) with one file containing the easily serialized data and a subfolder containing the images as separate files. My package directory should look something like this:
<UserChosenName.pspkg>/. // directory package containing my document data and images
PhraseSet.dat // regular file with serialized data from snapshot
Images/ // subdirectory for images (populated directly from my app as needed)
Image0.png
Image1.png
....
I've created a FileWrapper subclass that sets up the directory structure and adds the serialized snapshot appropriately, but when I run the app in an iOS simulator and click on "+" to create a new document, the app runs through the PkgFileWrapper init() and write() without error and returns to the browser window without apparently creating anything. I have declared that the Exported and Imported Type Identifiers conform to "com.apple.package". Can anyone suggest a way to get this working?
The PkgFileWrapper class looks like this:
class PkgFileWrapper: FileWrapper {
var snapshot: Data
init(withSnapshot: Data) {
self.snapshot = withSnapshot
let sWrapper = FileWrapper(regularFileWithContents: snapshot)
let dWrapper = FileWrapper(directoryWithFileWrappers: [:])
super.init(directoryWithFileWrappers: ["PhraseSet.dat" : sWrapper,
"Images" : dWrapper ])
// NOTE: Writing of images is done outside
// of the ReferenceFileDocument functionality.
}
override func write(to: URL,
options: FileWrapper.WritingOptions,
originalContentsURL: URL?) throws {
try super.write(to: to, options: options,
originalContentsURL: originalContentsURL)
}
required init?(coder inCoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
}
The solution is to not override PkgFileWrapper.write(...). If the directory structure is set up correctly in init(...), the files and directories are created automatically. The overridden write(...) function shown above has since been corrected accordingly.
If you want to write an image to the Images subdirectory, you could do something like the following:
func addImage(image: UIImage, name: String) {
let imageData = image.pngData()!
imageDirWrapper.addRegularFile(withContents: imageData,
preferredFilename: name)
}
The value of imageDirWrapper is the directory wrapper corresponding to the directory that holds your images, as created in PkgFileWrapper.init() above. A key concept to keep in mind here is that the "write" function gets called automatically at the appropriate time; you don't explicitly write out your image data. The ReferenceFileDocument machinery arranges for that, and also arranges for your app to be passed the appropriate URL for setting up your file wrappers.
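To make that concrete, here is a minimal sketch of the ReferenceFileDocument conformance that drives the automatic write; apart from PkgFileWrapper, the class name, the property and the content type identifier are illustrative assumptions, not code from the question:
import SwiftUI
import UniformTypeIdentifiers

final class PhraseDocument: ReferenceFileDocument {
    static var readableContentTypes: [UTType] { [UTType(exportedAs: "com.example.pspkg")] }

    @Published var snapshotData = Data()

    init() {}

    init(configuration: ReadConfiguration) throws {
        // Read side: walk configuration.file.fileWrappers, as shown in full below.
    }

    // SwiftUI first asks for an immutable snapshot of the model...
    func snapshot(contentType: UTType) throws -> Data {
        snapshotData
    }

    // ...then asks for a FileWrapper. Returning the directory wrapper here is
    // what makes the framework write the whole package; your own code never
    // calls write(to:) explicitly.
    func fileWrapper(snapshot: Data, configuration: WriteConfiguration) throws -> FileWrapper {
        PkgFileWrapper(withSnapshot: snapshot)
    }
}
With a conformance along these lines, SwiftUI calls fileWrapper(snapshot:configuration:) itself whenever the document needs writing; in the answer's setup the package wrapper (and its Images subdirectory, imageDirWrapper) is kept in a property rather than rebuilt on every write, so images added via addImage(image:name:) are preserved.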
The imageDirWrapper variable is set in the required init(...) for the ReferenceFileDocument protocol:
required init(configuration: ReadConfiguration) throws {
phraseSet = PhraseSet()
if configuration.file.isDirectory {
if let subdir = configuration.file.fileWrappers {
// first load in the phraseSet
for (name, wrapper) in subdir {
if name == PkgFileWrapper.phraseSetFileName {
if let data = wrapper.regularFileContents {
phraseSet = try PhraseSet(json: data)
}
}
}
// next load in the images and put them into the phrases.
for (name, wrapper) in subdir {
if name == PkgFileWrapper.imageDirectoryName {
if let imageDir = wrapper.fileWrappers {
imageDirWrapper = wrapper
for (iName, iWrapper) in imageDir {
print("image file: \(iName)")
if let d = iWrapper.regularFileContents {
for p in phraseSet.phrases {
if p.imageName == iName {
// TBD: downsample
var uiD = ImageData(data: d)
if doDownSample {
uiD.uiimageData = downsample(data: d,
to: imageSize)
} else {
_ = uiD.getUIImage()
}
images[iName] = uiD
}
}
}
}
}
}
}
}
} else {
throw CocoaError(.fileReadCorruptFile)
}
}
You can see here how imageDirWrapper is set by looking through the passed-in directory's subdirectories for the image directory name. Also some bonus code: it first looks through the passed-in directory for the data file and loads it in; then it looks for the image directory and processes it.
I reviewed the following documentation from Google on how to optimize existing Google scripts here:
https://developers.google.com/apps-script/guides/support/best-practices
In particular, the 'Use batch operations' section seems most relevant to my use case, where the optimal strategy is to batch all the reads into one operation and all the writes into a separate operation, rather than alternating between read and write calls.
Here is an example of inefficient code, as given by the url above:
// DO NOT USE THIS CODE. It is an example of SLOW, INEFFICIENT code.
// FOR DEMONSTRATION ONLY
var cell = sheet.getRange('a1');
for (var y = 0; y < 100; y++) {
xcoord = xmin;
for (var x = 0; x < 100; x++) {
var c = getColorFromCoordinates(xcoord, ycoord);
cell.offset(y, x).setBackgroundColor(c);
xcoord += xincrement;
}
ycoord -= yincrement;
SpreadsheetApp.flush();
}
Here is an example of efficient and improved code:
// OKAY TO USE THIS EXAMPLE or code based on it.
var cell = sheet.getRange('a1');
var colors = new Array(100);
for (var y = 0; y < 100; y++) {
xcoord = xmin;
colors[y] = new Array(100);
for (var x = 0; x < 100; x++) {
colors[y][x] = getColorFromCoordinates(xcoord, ycoord);
xcoord += xincrement;
}
ycoord -= yincrement;
}
sheet.getRange(1, 1, 100, 100).setBackgroundColors(colors);
Now, for my particular use case:
Instead of reading values into an array and then writing/modifying them in a separate operation, I want to create multiple Google Documents and replace placeholder text within each document.
For context:
I'm writing a script that reads a spreadsheet of students, with files to modify for each student, which are later sent as a mail merge. For example, there are 3 master files; each student gets a copy of the 3 master files, in which placeholder fields are replaced via .replaceText.
Here are my relevant snippets of code below:
function filesAndEmails() {
// Import the Spreadsheet application library.
const UI = SpreadsheetApp.getUi();
// Try calling the functions below; catch any error messages that occur to display as alert window.
try {
// Prompt and record user's email draft template.
// var emailLinkID = connectDocument(
// UI,
// title="Step 1/2: Google Document (Email Draft) Connection",
// dialog=`What email draft template are you referring to?
// This file should contain the subject line, name and body.
// Copy and paste the direct URL link to the Google Docs:`,
// isFile=true
// );
// TEST
var emailLinkID = "REMOVED FOR PRIVACY";
if (emailLinkID != -1) {
// Prompt and record user's desired folder location to store generated files.
// var fldrID = connectDocument(
// UI,
// title="Step 2/2: Google Folder (Storage) Connection",
// dialog=`Which folder would you like all the generated file(s) to be stored at?
// Copy and paste the direct URL link to the Google folder:`,
// isFile=false
// );
// TEST
var fldrID = DriveApp.getFolderById("REMOVED FOR PRIVACY");
// Retrieve data set from database.
var sheet = SpreadsheetApp.getActive().getSheetByName(SHEET_1);
// Range of data must include header row for proper key mapping.
var arrayOfStudentObj = objectify(sheet.getRange(3, 1, sheet.getLastRow()-2, 11).getValues());
// Establish array of attachment objects for filename and file url.
var arrayOfAttachObj = getAttachments();
// Opportunities for optimization begins here.
// Iterate through array of student Objects to extract each mapped key values for Google document insertion and emailing.
// Time Complexity: O(n^3)
arrayOfStudentObj.forEach(function(student) {
if (student[EMAIL_SENT_COL] == '') {
try {
arrayOfAttachObj.forEach(function(attachment) {
// All generated files will contain this filename format, followed by the attachment filename/description.
var filename = `${student[RYE_ID_COL]} ${student[FNAME_COL]} ${student[LNAME_COL]} ${attachment[ATTACH_FILENAME_COL]}`;
// Create a copy of the current iteration/file for the given student.
var file = DocumentApp.openById(DriveApp.getFileById(getID(attachment[ATTACH_FILEURL_COL], isFile=false)).makeCopy(filename, fldrID).getId())
// Replace and save all custom fields for the given student at this current iteration/file.
replaceCustomFields(file, student);
});
} catch(e) {
}
}
});
UI.alert("Script successfully completed!");
};
} catch(e) {
UI.alert("Error Detected", e.message + "\n\nContact a developer for help.", UI.ButtonSet.OK);
};
}
/**
* Replaces all fields specified by 'attributesArray' in the given student's file.
* @param {Object} file A single file object in which to replace all custom fields.
* @param {Object} student A single student object that contains all custom field attributes.
*/
function replaceCustomFields(file, student) {
// Iterate through each student's attribute (first name, last name, etc.) to change each field.
attributesArray.forEach(function(attribute) {
file.getBody()
.replaceText(attribute, student[attribute]);
});
// Must save and close file to finalize changes prior to moving onto next student object.
file.saveAndClose();
}
/**
* Processes the attachments sheet for filename and file ID.
* @return {Array} An array of attachment file objects.
*/
function getAttachments() {
var files = SpreadsheetApp.getActive().getSheetByName(SHEET_2);
return objectify(files.getRange(1, 1, files.getLastRow(), 2).getValues());
}
/**
* Creates student objects to contain the object attributes for each student based on
* the header row.
* @param {Array} array A 2D heterogeneous array that includes the header row for attribute key mapping.
* @return {Array} An array of student objects.
*/
function objectify(array) {
var keys = array.shift();
var objects = array.map(function (values) {
return keys.reduce(function (o, k, i) {
o[k] = values[i];
return o;
}, {});
});
return objects;
}
To summarize my code, I read the Google spreadsheet of students as an array of objects, so each student has attributes like their first name, last name, email, etc. I have done the same for the file attachments that would be included for each student. Currently, the forEach loop iterates through each student object, creates copies of the master file(s), replaces placeholder text in each file, then saves them in a folder. Eventually, I will be sending these file(s) to each student with the MailApp. However, due to the repetitive external calls via creating file copies for each student, the execution time is understandably very slow...
TLDR
Is it still possible to optimize my code with "batch operations" when my use case requires multiple DriveApp calls to create the copies of the files that get modified? Unlike reading raw values into an array and modifying them in a later operation, I don't think I can simply store document objects in an array and modify them at a later stage. Thoughts?
You could use the batchUpdate method of the Google Docs API.
See Tanaike's answer to get an idea of what the request object looks like.
All you need to do in your Apps Script now is build that object for multiple files.
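As a rough sketch of that idea (it assumes the Docs advanced service is enabled for the project and that attributesArray holds the placeholder strings used in your templates):
// Sketch only: one batchUpdate call per generated document replaces all
// placeholders at once, instead of one replaceText call per attribute.
function replaceCustomFieldsBatch(documentId, student) {
  var requests = attributesArray.map(function(attribute) {
    return {
      replaceAllText: {
        containsText: { text: attribute, matchCase: true },
        replaceText: String(student[attribute])
      }
    };
  });
  Docs.Documents.batchUpdate({ requests: requests }, documentId);
}
You would call this with the id returned by makeCopy(...), which also removes the DocumentApp.openById() / saveAndClose() round trip for every file.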
Note:
You can also further optimize your code by updating:
var arrayOfStudentObj = objectify(sheet.getRange(3, 1, sheet.getLastRow()-2, 11).getValues());
into:
// 0-based index of the email confirmation column
// (assuming column K holds the confirmation: 11 - 1 = 10)
var emailSentColumn = 10;
// filter data, don't include rows with blank values in column K
var arrayOfStudentObj = objectify(sheet.getRange(3, 1, sheet.getLastRow()-2, 11).getValues().filter(row=>row[emailSentColumn]));
This way, you can remove the if (student[EMAIL_SENT_COL] == '') condition from your forEach and reduce the number of loop iterations.
Resources:
Google Docs Apps Script Quickstart
Google Docs REST API
In the Flutter/Dart app that I am currently working on, I need to download large files from my servers. However, instead of storing the file in local storage, I need to parse its contents and consume them in one go. I thought the best way to accomplish this was to implement my own StreamConsumer and override the relevant methods. Here is what I have done thus far:
import 'dart:io';
import 'dart:async';
class Accumulator extends StreamConsumer<List<int>>
{
String text = '';
@override
Future<void> addStream(Stream<List<int>> s) async
{
print('Adding');
print(s.length); // this is what prints "Instance of 'Future<int>'" in the output below
return;
}
@override
Future<dynamic> close() async
{
print('closed');
return Future.value(text);
}
}
Future<String> fileFetch() async
{
String url = 'https://file.io/bse4moAYc7gW';
final HttpClientRequest request = await HttpClient().getUrl(Uri.parse(url));
final HttpClientResponse response = await request.close();
return await response.pipe(Accumulator());
}
Future<void> simpleFetch() async
{
String url = 'https://file.io/bse4moAYc7gW';
final HttpClientRequest request = await HttpClient().getUrl(Uri.parse(url));
final HttpClientResponse response = await request.close();
await response.pipe(File('sample.txt').openWrite());
print('Simple done!!');
}
void main() async
{
print('Starting');
await simpleFetch();
String text = await fileFetch();
print('Finished! $text');
}
When I run this in VSCode here is the output I get
Starting
Simple done!! //the contents of the file at https://file.io/bse4moAYc7gW are duly saved in the file sample.txt
Adding //clearly addStream is being called
Instance of 'Future<int>' //I had expected to see the length of the available data here
closed //close is clearly being called BUT
Finished! //back in main()
My understanding of the underlying issues here is still rather limited. My expectation was that I would use addStream to accumulate the contents being downloaded until there is nothing more to download, at which point close would be called and the program would exit.
Why is addStream showing Instance of 'Future<int>' rather than the length of the available content?
Although the VS Code debug console does eventually display exited, this happens several seconds after closed is displayed. I thought this might be an issue with having to call super.close(), but that is not it. What am I doing wrong here?
I was going to delete this question but decided to leave it here with an answer for the benefit of anyone else trying to do something similar.
The key point to note is that the call to Accumulator.addStream does just that: it furnishes a stream to be listened to, not the actual data itself. What you do next is this:
void whenData(List<int> data)
{
//you will typically get a sequence of one or more bytes here.
for(int value in data)
{
//accumulate the incoming data here
}
return;
}
void whenDone()
{
//now that you have all the file data *accumulated* do what you like with it
}
@override
Future<void> addStream(Stream<List<int>> s) async
{
    // Wait for the subscription to finish before completing, otherwise
    // close() will be called before all the data has arrived.
    await s.listen(whenData, onDone: whenDone).asFuture<void>();
    // you can optionally add a handler for `onError`
}
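Putting the pieces together, a complete consumer can look something like this; it is only a sketch that assumes the payload is UTF-8 text, and the names are mine:
import 'dart:async';
import 'dart:convert';
import 'dart:io';

class TextAccumulator implements StreamConsumer<List<int>> {
  final BytesBuilder _bytes = BytesBuilder();

  @override
  Future<void> addStream(Stream<List<int>> s) {
    // Complete only when the stream itself is done, so that close() is not
    // called before all the data has arrived.
    return s.forEach(_bytes.add);
  }

  @override
  Future<String> close() async => utf8.decode(_bytes.takeBytes());
}

Future<String> fetchAsText(String url) async {
  final request = await HttpClient().getUrl(Uri.parse(url));
  final response = await request.close();
  // pipe() resolves with whatever close() returned.
  return await response.pipe(TextAccumulator()) as String;
}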
To merge Storage files in Codename One, I came up with the following solution:
/**
* Merges the given list of Storage files into the output Storage file.
* @param toBeMerged
* @param output
* @throws IOException
*/
public static synchronized void mergeStorageFiles(List<String> toBeMerged, String output) throws IOException {
if (toBeMerged.contains(output)) {
throw new IllegalArgumentException("The output file cannot be contained in the toBeMerged list of input files.");
}
// Note: the temporary file used for merging is placed in the FileSystemStorage because it offers the method
* openOutputStream(String file, int offset) that allows appending to a stream. Storage doesn't have such a method.
long writtenBytes = 0;
String tempFile = FileSystemStorage.getInstance().getAppHomePath() + "/tempFileUsedInMerge";
for (String partialFile : toBeMerged) {
InputStream in = Storage.getInstance().createInputStream(partialFile);
OutputStream out = FileSystemStorage.getInstance().openOutputStream(tempFile, (int) writtenBytes);
Util.copy(in, out);
writtenBytes = FileSystemStorage.getInstance().getLength(tempFile);
}
Util.copy(FileSystemStorage.getInstance().openInputStream(tempFile), Storage.getInstance().createOutputStream(output));
FileSystemStorage.getInstance().delete(tempFile);
}
This solution is based on the API FileSystemStorage.openOutputStream(String file, int offset), which is the only API I found that allows appending the content of one file to another.
Are there other APIs that can be used to append or merge files?
Thank you
Since you end up copying everything to a Storage entry, I don't see the value of using FileSystemStorage as an intermediate merging tool.
The only reason I can think of is integrity of the output file (e.g. if a failure happens while writing), but that can happen here too. You can guarantee integrity by setting a flag, e.g. creating a file called "writeLock" and deleting it once the write has finished successfully.
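A sketch of that flag idea on top of Storage; the method name, the flag name and the buffer size are mine, and the usual com.codename1.io imports are assumed:
public static void mergeWithWriteLock(List<String> toBeMerged, String output) throws IOException {
    Storage s = Storage.getInstance();
    // While "writeLock" exists, any previously merged output must be treated as suspect.
    try (OutputStream lock = s.createOutputStream("writeLock")) {
        lock.write(1);
    }
    try (OutputStream out = s.createOutputStream(output)) {
        for (String partialFile : toBeMerged) {
            try (InputStream in = s.createInputStream(partialFile)) {
                Util.copyNoClose(in, out, 8192);
            }
        }
    }
    // Remove the flag only after the output has been written completely.
    s.deleteStorageFile("writeLock");
}
On startup you can then check Storage.getInstance().exists("writeLock") and discard or rebuild the merged entry if the flag is still there.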
To be clear, I would copy like this, which is simpler and faster:
try(OutputStream out = Storage.getInstance().createOutputStream(output)) {
for (String partialFile : toBeMerged) {
try(InputStream in = Storage.getInstance().createInputStream(partialFile)) {
Util.copyNoClose(in, out, 8192);
}
}
}
I need to download a file from a directory in Symfony. The problem is that I need to download this file only when I am authenticated.
For that, I created an endpoint (GET method) that takes an ID (the file id), and I call it from a React app, passing an authorization bearer token in the headers.
The problem is that the file is not downloaded directly. If I open DevTools in Chrome, I can see that the file is present in the response, but it is never downloaded.
This is the PHP code:
$file = new File($fullPath);
return $this->file($file);
How could I download the file?
This is some example code of mine:
/**
* @param Product $product
* @param TranslatorInterface $translator
*
* @SWG\Response(
* response=200,
* description="Get allergensheet of a product",
* )
* @SWG\Tag(name="Product")
*
* @Route("/allergensheet/{id}", methods="GET")
*
* @Security(name="Bearer")
* @return Mixed
*/
public function downloadAllergenSheet(Product $product, TranslatorInterface $translator) {
$allergens = $product->getAllergens();
if($allergens instanceof Allergens) {
if($allergens->getHasSheet()) {
$fs = new Filesystem();
if($fs->exists($translator->trans($product->getArticleNr() . '_allergensheet'))) {
return new BinaryFileResponse($translator->trans($product->getArticleNr() . '_allergensheet'));
}
else {
return new JsonResponse(false, Response::HTTP_NO_CONTENT);
}
}
else {
return new JsonResponse(false, Response::HTTP_NO_CONTENT);
}
}
else {
return new JsonResponse(false, Response::HTTP_NO_CONTENT);
}
}
Maybe this can help? This is for downloading a PDF file which contains some information about a product.
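One more detail: a response fetched from React with an Authorization header is never downloaded automatically by the browser, so the client still has to turn the response body into a Blob and trigger the download itself. On the Symfony side you can at least mark the response as an attachment; here is a sketch, where the filename is only illustrative:
use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;

// Serve the file as an attachment so a direct (non-XHR) request, or the Blob
// saved on the client, gets a sensible filename.
$response = new BinaryFileResponse($fullPath);
$response->setContentDisposition(
    ResponseHeaderBag::DISPOSITION_ATTACHMENT,
    'allergensheet.pdf'
);

return $response;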