Fiddler has an autosave feature which unfortunately clears the captured sessions each time it saves to an .SAZ file. Rather than accumulate a folder of saved Fiddler sessions (.SAZ files), I'd prefer to have one master .SAZ, albeit saved at regular intervals.
Since there doesn't appear to be an option in Fiddler to do this, is there a way to combine or merge .SAZ files?
There are two possibilities:
Use the Fiddler UI: When you execute the menu command "Load Archive..." you can append the data from the loaded SAZ file to the current session list. It is therefore possible to load multiple SAZ files and then save them into a new SAZ file.
Use FiddlerCore: Using FiddlerCore you can develop your own .NET program that merges multiple SAZ files into one new SAZ file. The methods for loading and saving sessions are pretty simple:
using System.Linq;
using Fiddler;

Session[] oLoaded1 = Utilities.ReadSessionArchive("FileToLoad1.saz", false);
Session[] oLoaded2 = Utilities.ReadSessionArchive("FileToLoad2.saz", false);
Session[] oAllSessions = oLoaded1.Concat(oLoaded2).ToArray(); // merge the two arrays
Utilities.WriteSessionArchive("Combined.saz", oAllSessions, null, false);
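Since the goal is one master archive built from a whole folder of captures, the same two calls generalize to N files. A minimal sketch (the folder path is my own placeholder; note that ReadSessionArchive can return null for an unreadable archive, hence the guard):

using System.IO;
using System.Linq;
using Fiddler;

class SazMerger
{
    static void Main()
    {
        // Gather every session from every .saz in the capture folder
        Session[] all = Directory.GetFiles(@"C:\FiddlerCaptures", "*.saz")
            .SelectMany(f => Utilities.ReadSessionArchive(f, false) ?? new Session[0])
            .ToArray();

        // Write one combined archive (no password, no progress callback)
        Utilities.WriteSessionArchive(@"C:\FiddlerCaptures\Master.saz", all, null, false);
    }
}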
Sources:
http://fiddler.wikidot.com/fiddlercore-demo
Utilities.ReadSessionArchive
Utilities.WriteSessionArchive
I've found uses of the following, but no documentation for other possible actions using the browserstack_executor:
fileExists
getFileContent
getFileProperties
setSessionStatus
I'm looking for a removeFile, unlinkFile, or deleteFile action to remove a file that was downloaded by the browser and is now in the way: when the next file with the same name downloads, it gets a (1) added to the filename.
In my Selenium test I'm doing something like this:
import base64
import json
import os

if driver._is_remote:
    action = {"action": "fileExists", "arguments": {"fileName": os.path.basename(self.filepath)}}
    if driver.execute_script(f'browserstack_executor:{json.dumps(action)}'):
        action = {"action": "getFileContent", "arguments": {"fileName": os.path.basename(self.filepath)}}
        content = driver.execute_script(f'browserstack_executor:{json.dumps(action)}')
        with open(self.filepath, "wb") as f:
            f.write(base64.b64decode(content))
        action = {"action": "deleteFile", "arguments": {"fileName": os.path.basename(self.filepath)}}
        delete_status = driver.execute_script(f'browserstack_executor:{json.dumps(action)}')
I keep getting "invalid action" with all three of the names I've tried, so there must be something else to get rid of a file on the BrowserStack machine.
I believe 'browserstack_executor' is a custom executor specific to BrowserStack and has a limited set of operations that it can perform.
The supported operations are available in their documentation:
https://www.browserstack.com/docs/automate/selenium/test-file-upload
https://www.browserstack.com/docs/automate/selenium/test-file-download
Hence, operations like removeFile, unlinkFile, or deleteFile cannot be performed; they are not currently supported and are not mentioned in the links shared above.
Per the company's support staff, there is no list action and unlink is not supported. To work around it, I modified the FileExists ExpectedCondition I was using so that, after a file is pulled from the test system, it auto-increments the filename and waits for the "next available" name; that way my tests are the same whether they run locally or on BrowserStack. A sketch of the idea follows.
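Roughly, the condition can track how many copies of a download it has already consumed and poll for the matching " (1)", " (2)", ... variant. This is only a sketch under that assumption; NextDownloadExists and its counter are my own names, not part of Selenium or BrowserStack:

import json
import os

class NextDownloadExists:
    """Truthy once the expected copy of `filepath` exists on the remote machine.

    Files on the BrowserStack machine cannot be deleted, so each repeat download
    of the same name arrives as 'name (1).ext', 'name (2).ext', and so on.
    """
    counts = {}  # filepath -> number of copies already pulled

    def __init__(self, filepath):
        self.filepath = filepath

    def _remote_name(self):
        base, ext = os.path.splitext(os.path.basename(self.filepath))
        n = self.counts.get(self.filepath, 0)
        return f"{base} ({n}){ext}" if n else base + ext

    def __call__(self, driver):
        action = {"action": "fileExists",
                  "arguments": {"fileName": self._remote_name()}}
        return driver.execute_script(f"browserstack_executor:{json.dumps(action)}")

After a successful wait, the test bumps NextDownloadExists.counts[self.filepath] by one, so the next download of the same file looks for the following suffix.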
Background
I am using file system storage with the Shrine::Attachment module in a model setting (my_model), with ActiveRecord (Rails). I am also using it in a direct upload scenario, therefore I need the response from the file upload (save to cache).
my_model.rb
class MyModel < ApplicationRecord
  include ImageUploader::Attachment(:image) # adds an `image` virtual attribute
  # ... omitted relations & code ...
end
my_controller.rb
def create
  @my_model = MyModel.new(my_model_params)
  # currently creating derivatives & persisting all in one go
  @my_model.image_derivatives! if @my_model.image
  if @my_model.save
    render json: { success: "MyModel created successfully!" }
  else
    @errors = @my_model.errors.messages
    render 'errors', status: :unprocessable_entity
  end
end
Goal
Ideally I want to clear only the cached file(s) I currently have hold of in my create controller, as soon as they (the derivatives and the original file) have been persisted to permanent storage.
What is the best way to do this for scenario A (synchronous) and scenario B (asynchronous)?
What I have considered/tried
After reading through the docs I have noticed three possible ways of clearing cached images:
1. Run a rake task to clear cached images.
I really don't like this, as I believe the cached files should be cleaned once the file has been persisted, not left to an admin task (cron job) that can't be tested with an image persistence spec.
# FileSystem storage
file_system = Shrine.storages[:cache]
file_system.clear! { |path| path.mtime < Time.now - 7*24*60*60 } # delete files older than 1 week
2. Run Shrine.storages[:cache] in an after block
Is this only for background jobs?
attacher.atomic_persist do |reloaded_attacher|
  # run code after attachment change check but before persistence
end
3. Move the cache file to permanent storage
I don't think I can use this, as my direct upload occurs in two distinct parts: first the attached file is uploaded to the cache store, then it is saved onto the newly created record.
plugin :upload_options, cache: { move: true }, store: { move: true }
Are there better ways of clearing promoted images from cache for my needs?
Synchronous solution for single image upload case:
def create
  @my_model = MyModel.new(my_model_params)
  image_attacher = @my_model.image_attacher
  image_attacher.create_derivatives # create the different sized images
  image_cache_id = image_attacher.file.id # save the cache file id, as it is lost in the next step
  image_attacher.record.save(validate: true) # promote the original file to permanent storage
  # only clear the cached image that was used to create the derivatives
  # (other images still being processed stay cached; we don't want to blow them away)
  Shrine.storages[:cache].delete(image_cache_id)
end
I have a Camel route that should return a file in the response, created from the request data. While this works fine with the following (greatly simplified) route, the problem is that I first need to create an actual file on the server, which I can then set as the exchange body.
As I don't want these files piling up on disk, I would prefer to either not create them at all or delete them directly from the same route.
The only way around this I currently see is a regular cleanup job that deletes these temporary files.
Any suggestions on how to solve this in a better way?
from("cxfrs://...")
.process(exchange -> {
File file = new File("out.pdf");
// write data to new FileOutputStream(file);
exchange.getIn().setBody(file);
})
The response content type is application/octet-stream.
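For illustration, the "delete directly from the same route" idea could look like the sketch below, assuming the generated PDF is small enough to buffer in memory: write to a temp file, read the bytes into the exchange body, and delete the file before the processor returns.

import java.io.File;
import java.nio.file.Files;

from("cxfrs://...")
    .process(exchange -> {
        File file = File.createTempFile("out", ".pdf");
        try {
            // write data to new FileOutputStream(file) as before ...
            // then hand the raw bytes (not the File) to the response
            exchange.getIn().setBody(Files.readAllBytes(file.toPath()));
        } finally {
            file.delete(); // nothing is left on disk once the bytes are in the exchange
        }
    });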
How do I write a Google Apps Script that deletes files?
This finds files:
var ExistingFiles = DocsList.find(fileName);
But DocsList.deleteFile does not exist to delete a file.
Is there a way to move those files to another Folder or to Trash?
The other workaround I would consider is to be able to override an existing file with the same name.
Currently, when I create a file with a name that is already used in My Drive, it creates a second file with the same name. I would like to keep one file (the new one is kept and the old one is lost).
There are 3 services available to delete a file.
DriveApp - Built-in to Apps Script
Advanced Drive Service - Built-in to Apps Script but must be enabled. Has more capability than DriveApp
Google Drive API - Not built-in to Apps Script, but can be used from Apps Script via the Drive REST API together with UrlFetchApp.fetch(url, options), as sketched below
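For the third option, a minimal sketch of a permanent delete through the REST API (the v3 endpoint and the fileId parameter are my own choices here, and the script must already have been granted a Drive OAuth scope):

function deleteFileViaRestApi(fileId) {
  var url = 'https://www.googleapis.com/drive/v3/files/' + fileId;
  var options = {
    method: 'delete',
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
    muteHttpExceptions: true
  };
  var response = UrlFetchApp.fetch(url, options);
  Logger.log(response.getResponseCode()); // 204 means deleted without going to the trash
}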
The DocsList service is now deprecated.
The Advanced Drive Service can be used to delete a file without sending it to the trash. Seriously consider the risk of not being able to retrieve the deleted file. The Advanced Drive Service has a remove method which removes a file without sending it to the trash folder. Advanced services have many of the same capabilities as the APIs, without needing to make an HTTPS GET or POST request and without needing an OAuth library.
function deleteFile(myFileName) {
  var allFiles, idToDLET, myFolder, rtrnFromDLET, thisFile;
  myFolder = DriveApp.getFolderById('Put_The_Folder_ID_Here');
  allFiles = myFolder.getFilesByName(myFileName);
  while (allFiles.hasNext()) { // while there is another element in the iterator
    thisFile = allFiles.next();
    idToDLET = thisFile.getId();
    //Logger.log('idToDLET: ' + idToDLET);
    rtrnFromDLET = Drive.Files.remove(idToDLET);
  }
}
This combines the DriveApp service and the Drive API to delete the file without sending it to the trash. The Drive API method .remove(id) needs the file ID. If the file ID is not available but the file name is, the file can first be looked up by name to get its ID.
In order to use the Drive API, you need to add it through the Resources > Advanced Google Services menu and set the Drive API to ON. Also make sure that the Drive API is turned on in your Google Cloud Platform project. If it's not turned on in BOTH places, it won't be available.
Now you may use the following if you have the file as a spreadsheet, doc, etc.:
DriveApp.getFileById(spreadsheet.getId()).setTrashed(true);
or, if you already have the file object rather than a spreadsheet, doc, etc., you may use:
file.setTrashed(true);
This code uses the DocsList Class which is now deprecated.
Try this:
function test(){
  deleteDocByName('Name-of-the-file-to-delete')
}

function deleteDocByName(fileName){
  var docs = DocsList.find(fileName)
  for(var n = 0; n < docs.length; ++n){
    if(docs[n].getName() == fileName){
      var ID = docs[n].getId()
      DocsList.getFileById(ID).setTrashed(true)
    }
  }
}
Since you can have many docs with the same name, I used a for loop to go through all the docs in the array of documents and trash them one by one if necessary.
I used a function with the filename as a parameter to simplify its use in a script; use the test function to try it.
Note: be aware that all files with this name will be trashed (and recoverable ;-).
About the last part of your question on keeping the most recent file and deleting the old one: it would be doable (by reading the last accessed date & time), but I think it is a better idea to delete the old file before creating a new one with the same name; that is far more logical and safe! A sketch of that delete-then-create pattern follows.
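A minimal sketch of that pattern with DriveApp (createOrReplace is my own name, and DriveApp.createFile as used here makes a plain text file in the root folder):

function createOrReplace(fileName, content) {
  // trash any existing copies first so only one file keeps this name
  var existing = DriveApp.getFilesByName(fileName);
  while (existing.hasNext()) {
    existing.next().setTrashed(true); // old copy goes to the trash (recoverable)
  }
  return DriveApp.createFile(fileName, content);
}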
Though the DocsList service is now deprecated, the setTrashed method, as per the Class Folder reference, is still valid:
https://developers.google.com/apps-script/reference/drive/folder#settrashedtrashed
So this should simply work:
ExistingFiles.setTrashed(true);
Here is another way to do it without needing the Drive API (based on Allan's response).
function deleteFile(fileName, folderName) {
  var myFolder, allFiles, file;
  myFolder = DriveApp.getFoldersByName(folderName).next();
  allFiles = myFolder.getFilesByName(fileName);
  while (allFiles.hasNext()) {
    file = allFiles.next();
    file.getParents().next().removeFile(file);
  }
}
Here is a slightly modified version using the above. It backs up the given file to a specified folder and also removes any previous backups with the same name, so there are no duplicates.
The idea here is to back up once per day, retaining one month of backups in the backup folder of your choice. Remember to set your trigger to daily in Apps Script.
https://gist.github.com/fmarais/a962a8b54ce3f53f0ed57100112b453c
function archiveCopy() {
  var file = DriveApp.getFileById("original_file_id_to_backup");
  var destination = DriveApp.getFolderById("backup_folder_id");
  var timeZone = Session.getScriptTimeZone();
  // "dd" is the day of the month, so names repeat monthly: one backup per day, kept for a month
  var formattedDate = Utilities.formatDate(new Date(), timeZone, "dd");
  var name = SpreadsheetApp.getActiveSpreadsheet().getName() + "_" + formattedDate;
  // remove the old backup with the same name
  var allFiles = destination.getFilesByName(name);
  while (allFiles.hasNext()) {
    var thisFile = allFiles.next();
    thisFile.setTrashed(true);
  }
  // make the new backup
  file.makeCopy(name, destination);
}
I have often noticed that when a database insert for a model fails, the data loaded before it stays in the database. So when you try to load the same fixture file again, it gives an error.
Is there any way the data:load process can be made atomic, i.e. go or no-go for all data, so that data is never inserted halfway?
Hopefully this should work: write a task that does the same as data:load, but wrap the loading in a transaction:
$databaseManager = new sfDatabaseManager($this->configuration);
$conn = $databaseManager->getDatabase('doctrine')->getDoctrineConnection();
$conn->beginTransaction();
try {
    // ... load the fixtures ...
    $conn->commit();
} catch (Exception $e) { // maybe you can be more specific about the exception thrown
    echo $e->getMessage();
    $conn->rollback();
}
Fixtures are meant for loading initial data, which means you should be able to build --all --and-load, or in other words, clear all data and re-load fixtures. It doesn't take any longer.
One option you have is to break your fixtures into multiple files and load them individually. This is also what I'd do if you first need to load large amounts of data via a script or from a CSV (i.e. something bigger than just a few fixtures). That way you don't need to redo everything if you had a fixtures problem somewhere else.