I've found uses of the following, but no documentation for other possible actions using the browserstack_executor:
fileExists
getFileContent
getFileProperties
setSessionStatus
I'm looking for a removeFile, unlinkFile, or deleteFile action to remove a file that was downloaded by the browser and is now in the way: when the next file with the same name downloads, it gets a " (1)" added to the filename.
In my selenium test I'm doing something like this:
if driver._is_remote:
    action = {"action": "fileExists", "arguments": {"fileName": os.path.basename(self.filepath)}}
    if driver.execute_script(f'browserstack_executor:{json.dumps(action)}'):
        action = {"action": "getFileContent", "arguments": {"fileName": os.path.basename(self.filepath)}}
        content = driver.execute_script(f'browserstack_executor:{json.dumps(action)}')
        with open(self.filepath, "wb") as f:
            f.write(base64.b64decode(content))
        action = {"action": "deleteFile", "arguments": {"fileName": os.path.basename(self.filepath)}}
        delete_status = driver.execute_script(f'browserstack_executor:{json.dumps(action)}')
I keep getting "invalid action" for all three names I've tried, so there must be some other way to get rid of a file on the BrowserStack machine.
I believe 'browserstack_executor' is a custom executor specific to BrowserStack and has a limited set of operations that it can perform.
The supported operations are available in their documentation:
https://www.browserstack.com/docs/automate/selenium/test-file-upload
https://www.browserstack.com/docs/automate/selenium/test-file-download
Hence, operations like removeFile, unlinkFile, or deleteFile cannot be performed; they are not currently supported and are not mentioned in the links shared above.
Per the company's support staff, there is no such list, and unlink is not supported. To work around it, I modified the FileExists ExpectedCondition I was using so that, after a file is pulled from the test system, it auto-increments the filename and waits for the "next available" name; that way my tests behave the same running locally or on BrowserStack.
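For reference, here is a minimal sketch of that auto-increment idea (the helper and its bookkeeping are my own illustration, not a BrowserStack API; it assumes the browser's "name (1).ext" naming scheme):

import os

def next_available_name(filepath, used_names):
    """Return the filename the browser will actually use for the next
    download: 'report.csv' first, then 'report (1).csv', and so on.
    used_names is the set of names already pulled from the remote machine."""
    base, ext = os.path.splitext(os.path.basename(filepath))
    candidate = base + ext
    counter = 0
    while candidate in used_names:
        counter += 1
        candidate = "{} ({}){}".format(base, counter, ext)
    used_names.add(candidate)
    return candidate

The FileExists wait then looks for next_available_name(...) instead of the original basename, so each run targets the copy the browser actually wrote.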
Related
Fiddler has an autosave feature which unfortunately clears the captured sessions each time it saves to an .SAZ. Rather than have a folder of Fiddler save sessions (.SAZ's), I'd prefer to have one master .SAZ, albeit saved at regular intervals.
Since there doesn't appear to be an option in Fiddler to do this, is there a way to combine or merge .SAZ files?
There are two possibilities:
Use the Fiddler UI: When you execute the menu command "Load Archive..." you can append the data from the loaded SAZ file to the current session list. It is therefore possible to load multiple SAZ files and then save them into a new SAZ file.
Use Fiddler Core: Using Fiddler Core you can develop your own .NET program that merges multiple SAZ files into one new SAZ file. The methods for loading and saving sessions are pretty simple:
using System.Linq;
using Fiddler;

Session[] oLoaded1 = Utilities.ReadSessionArchive("FileToLoad1.saz", false);
Session[] oLoaded2 = Utilities.ReadSessionArchive("FileToLoad2.saz", false);
Session[] oAllSessions = oLoaded1.Concat(oLoaded2).ToArray(); // merge the two arrays
Utilities.WriteSessionArchive("Combined.saz", oAllSessions, null, false);
Sources:
http://fiddler.wikidot.com/fiddlercore-demo
Utilities.ReadSessionArchive
Utilities.WriteSessionArchive
I'm using Webdriver.io to download a file continuously
I tried the following code:
var webdriverio = require('webdriverio');
var options = {
    desiredCapabilities: {
        browserName: 'chrome'
        // waitforTimeout: 1000000
    }
};
webdriverio
    .remote(options)
    .init()
    .url('https://xxx')
    .setValue('#username', 'xxx@gmail.com')
    .click('#login-submit')
    .pause(1000)
    .setValue('#password', '12345')
    .click('#login-submit')
    .getTitle().then(function(title){
        console.log('Title was: ' + title);
    })
    .pause(20000)
    .getUrl().then(function(url){
        console.log('URL: ' + url);
    })
    .getTitle().then(function(title){
        console.log('Title was: ' + title);
    })
    .click("a[href='/wiki/admin'] button.iwdh")
    .getUrl().then(function (url) {
        console.log('URL after settings ' + url);
    })
    .pause(3000)
    .scroll('div.jsAtfH', 0, 1000)
    .click("a[href='/wiki/plugins/servlet/ondemandbackup/admin']")
    .pause(10000)
    .click('//*[@id="backup"]/a')
    //.pause(400000)
    .end();
Note: the file size is 7GB, and how long it takes to download depends on the network. Instead of using pause() and timeout(), is there any way to do this using webdriver.io or node.js?
To begin with, your current task (waiting for a HUUUUGE file to download) is not a common use-case when it comes to Webdriver-based automation frameworks, WebdriverIO included. Such frameworks aren't meant to download massive files.
First off, you're confusing the waitforTimeout value with the WebdriverIO test timeout. Your test is timing out before the .pause() ends.
Currently you're running your tests via the WebdriverIO test runner. If you want to increase the test timeout, you have to use a test framework (Mocha, Jasmine, or Cucumber) and set its timeout value to whatever you find appropriate. Going forward, I recommend you use Mocha (coming from an ex-Cucumber guy).
You will have to install Mocha: npm install --save-dev wdio-mocha-framework and run your tests with it. Your test should look like this afterwards:
describe("Your Testsuite", function() {
it("\nYour Testcase\n", function() {
return browser
.url('https://xxx')
.setValue('#username', ‘xxx#gmail.com’)
.click('#login-submit')
// rest of the steps
.scroll('div.jsAtfH',0,1000)
.click("a[href='/wiki/plugins/servlet/ondemandbackup/admin']")
.pause(10000)
.click('//*[#id="backup"]/a')
)}
)}
Your config (wdio.conf.js) should contain the following:
framework: 'mocha',
mochaOpts: {
    ui: 'bdd',
    timeout: 99999999
}
As a side note, I tried waiting a very long time (> 30 mins) using the above config and had no issues whatsoever.
Let me know if this helps. Cheers!
If you click a download button in your browser and then close the browser, the download is cancelled too. If you own the website with the download button, try to rework it so that you have a downloadable URL; then you can look for a module or another way to download files from an HTTP URL. If you are not the owner and you can't find a URL in the href, you can maybe grab the generated download URL from the network tab of your browser's inspector.
Also, I have never heard of a browser being closed after a timeout; maybe that comes from webdriver.io. I have never left Chrome open that long with webdriver.io.
As a workaround you can use an interval of, say, one minute, and issue a webdriver.io command each time so the session doesn't time out.
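A rough sketch of that idea (assuming the WDIO test runner's synchronous mode, where browser is a global; the names are illustrative):

// Ping the session once a minute so it is not killed as idle
// while the download keeps running in the background.
var keepAlive = setInterval(function () {
    browser.getTitle(); // any cheap command counts as activity
}, 60 * 1000);

// ... wait here until the file has finished downloading ...

clearInterval(keepAlive); // stop pinging once the download is done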
I know it's a very old question, but I wanted to answer a question from the comments (and I don't have the ability to comment yet). I will answer the main question too.
When I am giving the timeout in the "wdio.conf.js" file it's not able to
download the file, it's closing the session; but by giving .pause(2000000)
in the webdriver.io code it's able to download a file of 7GB. What is the
use of the timeout in "wdio.conf.js" if it's kicking out the session
without the download?
That timeout is related to the state of elements during the test run: it "determines how long the instance should wait for that element to reach the state".
https://webdriver.io/docs/timeouts.html - this can help. But to answer the question too:
There are many more timeouts such a test deals with. As iamdanchiv wrote, for this you should try using one of the automatically supported frameworks, like Mocha or Jasmine.
IMO the easiest way right now would be to do a quick fresh setup using the CLI provided by WDIO:
https://webdriver.io/docs/gettingstarted.html
There you can simply pick the additional framework you want to use. I would suggest using Jasmine and Chromedriver for this. Then in your wdio.conf.js you can change this part:
waitforTimeout: 10000,
jasmineNodeOpts: {
    // Jasmine default timeout
    defaultTimeoutInterval: 60000,
},
To something that works for you. Or you can use one of the boilerplate projects from the wdio page, like this one:
https://webdriver.io/docs/boilerplate.html
But that's not all! You will still have to create some method or function that checks for the file. So check where the file gets downloaded, or make it download where you want, and then create a method that uses some kind of wait:
https://webdriver.io/docs/api/browser/waitUntil.html
browser.waitUntil(condition, { timeout, timeoutMsg, interval })
So you can set the timeout either here or in wdio.conf.js via 'waitforTimeout'. Inside the waitUntil condition you can use the Node filesystem module (https://nodejs.org/api/fs.html) to check the state of the file.
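For example, a minimal sketch (the download path is a placeholder, and the options-object form of waitUntil assumes a recent WebdriverIO):

const fs = require('fs');

const downloadedFile = '/path/to/downloads/backup.zip'; // placeholder

browser.waitUntil(
    () => fs.existsSync(downloadedFile) && fs.statSync(downloadedFile).size > 0,
    {
        timeout: 60 * 60 * 1000, // give a 7GB file up to an hour
        interval: 60 * 1000,     // re-check once a minute
        timeoutMsg: 'file did not finish downloading in time'
    }
);

Note that "the file exists and is non-empty" is a weak completion signal for a huge download; waiting until the size stops growing (or until Chrome's temporary .crdownload file disappears) is more reliable.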
This can be helpful to get through waiting for file condition:
https://blog.kevinlamping.com/downloading-files-using-webdriverio/
Good day all,
I'm currently trying to run Cucumber tests on a ReactJS component (a dropdown search selection) in headless mode using PhantomJS, but a weird situation is preventing me from completing these tests.
I am using the following ReactJS dropdown, http://jedwatson.github.io/react-select/, specifically the "Github users (Async with fetch.js)" example.
The issue occurs when the scenario gets to its fourth example test: it fails, even though the same code passes the first three tests.
I thought the problem was the fourth example itself, so I swapped in other values, and it still fails on the fourth step.
This is the code used to enter the value into the drop down search
find(".Select").trigger("click")
fix_overlap = %{ $('.Select-placeholder').css('z-index', -99999) }
page.execute_script(fix_overlap)
find(".Select .Select-input input").native.send_keys(with)
find(".Select-menu-outer", text: with, visible: :all, match: :first).click
The React control makes an async call to fetch the input data from an API endpoint.
I am able to run the test in a browser with no issues.
The error returned from the test is that the value can't be found in the dropdown.
I have added the following options to the environment setup when registering Poltergeist,
options = {:js_errors => false, phantomjs_options: ['--debug=true'], debug: false }
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app, options)
end
to see if there is an internal error that is not shown in the debug console.
I have done a page.save_screenshot to see the state just before the error, and the dropdown has the correct value.
Questions
Are there any other options that can be added to show more information/errors?
Has anyone experienced this issue before?
I'm open to any suggestions to fix this weird behaviour.
Extra details
gem 'poltergeist','= 1.9.0'
gem 'cucumber', '~> 2.0'
For the "Cities (Large Dataset)" example on the linked page, the following code selects the "New York" entry for me, without resorting to using trigger, execute_script or native
with = "New York"
section = find('.section', text: 'Cities (Large Dataset)')
section.find('.Select').click
section.find('.Select-placeholder').send_keys(with)
section.find('.VirtualizedSelectOption', exact_text: with).click
That is using the latest Poltergeist and Capybara. Without the latest Capybara you'd probably need to pass a regex as the :text option in the last line rather than using the :exact_text option (otherwise you would get multiple responses).
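For example, with an older Capybara the last line would become something like:

section.find('.VirtualizedSelectOption', text: /\ANew York\z/).click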
For the "Contributors (Async)" example
with = 'Craig Dallimore'
section = sess.find('.section', text: 'Contributors (Async)')
section.find('.Select').click
section.find('.Select-input input').send_keys(with.gsub(' ', ''))
section.find('.Select-option', exact_text: with).click
will select someone
I'm trying to get JSON formatted logs on a Compute Engine VM instance to appear in the Log Viewer of the Google Developer Console. According to this documentation it should be possible to do so:
Applications using App Engine Managed VMs should write custom log
files to the VM's log directory at /var/log/app_engine/custom_logs.
These files are automatically collected and made available in the Logs
Viewer.
Custom log files must have the suffix .log or .log.json. If the suffix
is .log.json, the logs must be in JSON format with one JSON object per
line. If the suffix is .log, log entries are treated as plain text.
This doesn't seem to be working for me: logs ending with .log are visible in the Log Viewer, but displayed as plain text. Logs ending with .log.json aren't visible at all.
It also contradicts another recent article that states that file names must end in .log and their contents are treated as plain text.
As far as I can tell Google uses fluentd to index the log files into the Log Viewer. In the GitHub repository I cannot find any evidence that .log.json files are being indexed.
Does anyone know how to get this working? Or is the documentation out-of-date and has this feature been removed for some reason?
Here is one way to generate JSON logs for the Managed VMs logviewer:
The desired JSON format
The goal is to create a single line JSON object for each log line containing:
{
  "message": "Error occurred!",
  "severity": "ERROR",
  "timestamp": {
    "seconds": 1437712034000,
    "nanos": 905
  }
}
(information sourced from Google: https://code.google.com/p/googleappengine/issues/detail?id=11678#c5)
Using python-json-logger
See: https://github.com/madzak/python-json-logger
import datetime
import logging
import time


def get_timestamp_dict(when=None):
    """Converts a datetime.datetime to a timestamp dict for the log entry.

    Requires special handling to preserve microseconds.

    Args:
      when:
        A datetime.datetime instance. If None, the timestamp for 'now'
        will be used.

    Returns:
      A dict with 'seconds' (integer milliseconds since the epoch) and
      'nanos' fields.
    """
    if when is None:
        when = datetime.datetime.utcnow()
    ms_since_epoch = float(time.mktime(when.utctimetuple()) * 1000.0)
    return {
        'seconds': int(ms_since_epoch),
        'nanos': int(when.microsecond / 1000.0),
    }


def setup_json_logger(suffix=''):
    try:
        from pythonjsonlogger import jsonlogger

        class GoogleJsonFormatter(jsonlogger.JsonFormatter):
            FORMAT_STRING = "{message}"

            def add_fields(self, log_record, record, message_dict):
                super(GoogleJsonFormatter, self).add_fields(log_record,
                                                            record,
                                                            message_dict)
                log_record['severity'] = record.levelname
                log_record['timestamp'] = get_timestamp_dict()
                log_record['message'] = self.FORMAT_STRING.format(
                    message=record.message,
                    filename=record.filename,
                )

        formatter = GoogleJsonFormatter()
        log_path = '/var/log/app_engine/custom_logs/worker' + suffix + '.log.json'
        make_sure_path_exists(log_path)  # helper that creates the directory if needed
        file_handler = logging.FileHandler(log_path)
        file_handler.setFormatter(formatter)
        logging.getLogger().addHandler(file_handler)
    except OSError:
        logging.warn("Custom log path not found for production logging")
    except ImportError:
        logging.warn("JSON Formatting not available")
To use, simply call setup_json_logger - you may also want to change the name of worker for your log.
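For instance, a minimal usage sketch (the suffix is arbitrary):

import logging

setup_json_logger(suffix='-1')    # writes to /var/log/app_engine/custom_logs/worker-1.log.json
logging.getLogger().setLevel(logging.INFO)
logging.error("Error occurred!")  # emitted as a one-line JSON object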
I am currently working on a NodeJS app running on a managed VM, and I am also trying to get my logs printed on the Google Developer Console. I created my log files in the '/var/log/app_engine' directory as described in the documentation. Unfortunately this doesn't seem to be working for me, even for the '.log' files.
Could you describe where your logs are created? Also, is your managed VM configured as "Managed by Google" or "Managed by User"? Thanks!
How do I write a Google Apps Script that deletes files?
This finds files:
var ExistingFiles = DocsList.find(fileName);
But DocsList.deleteFile does not exist to delete a file.
Is there a way to move those files to another Folder or to Trash?
The other workaround I would consider is being able to overwrite an existing file with the same name.
Currently, when I create a file with a name already used in My Drive, a second file with the same name is created. I would like to keep one file (the new one is kept and the old one is lost).
There are 3 services available to delete a file.
DriveApp - Built-in to Apps Script
Advanced Drive Service - Built-in to Apps Script but must be enabled. Has more capability than DriveApp
Google Drive API - Not built-in to Apps Script, but can be used from Apps Script using the Drive REST API together with UrlFetchApp.fetch(url,options)
The DocsList service is now deprecated.
The Advanced Drive Service can be used to delete a file without sending it to the trash. Seriously consider the risk of not being able to retrieve the deleted file. The Advanced Drive Service has a remove method which removes a file without sending it to the trash folder. Advanced services have many of the same capabilities as the APIs, without needing to make an HTTPS GET or POST request, and without needing an OAuth library.
function deleteFile(myFileName) {
  var allFiles, idToDLET, myFolder, rtrnFromDLET, thisFile;
  myFolder = DriveApp.getFolderById('Put_The_Folder_ID_Here');
  allFiles = myFolder.getFilesByName(myFileName);
  while (allFiles.hasNext()) { // while there is another element in the iterator
    thisFile = allFiles.next();
    idToDLET = thisFile.getId();
    //Logger.log('idToDLET: ' + idToDLET);
    rtrnFromDLET = Drive.Files.remove(idToDLET);
  }
}
This combines the DriveApp service and the Drive API to delete the file without sending it to the trash. The Drive API method .remove(id) needs the file ID. If the file ID is not available, but the file name is, then the file can first be looked up by name, and then get the file ID.
In order to use the Drive API, you need to add it through the Resources > Advanced Google Services menu. Set the Drive API to ON, and make sure that the Drive API is also turned on in your Google Cloud Platform project. If it's not turned on in BOTH places, it won't be available.
Now you may use the following if the file is a spreadsheet, doc, etc.:
DriveApp.getFileById(spreadsheet.getId()).setTrashed(true);
or, if you already have the File object rather than a spreadsheet, doc, etc., you may use:
file.setTrashed(true);
This code uses the DocsList Class which is now deprecated.
Try this:
function test(){
  deleteDocByName('Name-of-the-file-to-delete');
}

function deleteDocByName(fileName){
  var docs = DocsList.find(fileName);
  for(var n = 0; n < docs.length; ++n){
    if(docs[n].getName() == fileName){
      var ID = docs[n].getId();
      DocsList.getFileById(ID).setTrashed(true);
    }
  }
}
Since you can have many docs with the same name, I used a for loop to go through all the docs in the array and trash each one that matches.
I used a function with the filename as a parameter to simplify its use in a script; use the test function to try it.
Note : be aware that all files with this name will be trashed (and recoverable ;-)
About the last part of your question, keeping the most recent file and deleting the old one: it would be doable (by reading the last accessed date & time), but I think it is a better idea to delete the old file before creating a new one with the same name... far more logical and safe!
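A minimal sketch of that delete-before-create idea using the non-deprecated DriveApp service (the function name and arguments are illustrative):

function replaceFile(fileName, newContent) {
  // Trash any existing files with this name so only the new copy remains.
  var existing = DriveApp.getFilesByName(fileName);
  while (existing.hasNext()) {
    existing.next().setTrashed(true);
  }
  // Create the replacement file in the Drive root.
  DriveApp.createFile(fileName, newContent);
}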
Though the DocsList service is now deprecated, as the Class Folder reference shows, the setTrashed method is still valid:
https://developers.google.com/apps-script/reference/drive/folder#settrashedtrashed
So this should work (note that DocsList.find returns an array, so trash each file in it):
ExistingFiles.forEach(function(file) {
  file.setTrashed(true);
});
Here is another way to do it without the need for the Drive API (based on Allan's response).
function deleteFile(fileName, folderName) {
  var myFolder, allFiles, file;
  myFolder = DriveApp.getFoldersByName(folderName).next();
  allFiles = myFolder.getFilesByName(fileName);
  while (allFiles.hasNext()) {
    file = allFiles.next();
    file.getParents().next().removeFile(file);
  }
}
Here is a slightly modified version of the above. It backs up the given file to a specified folder and removes any previous backups with the same name, so there are no duplicates.
The idea is to back up once per day, retaining one month of backups in the backup folder of your choice. Remember to set your trigger to daily in Apps Script.
https://gist.github.com/fmarais/a962a8b54ce3f53f0ed57100112b453c
function archiveCopy() {
  var file = DriveApp.getFileById("original_file_id_to_backup");
  var destination = DriveApp.getFolderById("backup_folder_id");
  var timeZone = Session.getScriptTimeZone();
  var formattedDate = Utilities.formatDate(new Date(), timeZone, "dd"); // day of month: 1 month of backups, one per day
  var name = SpreadsheetApp.getActiveSpreadsheet().getName() + "_" + formattedDate;
  // remove old backup
  var allFiles = destination.getFilesByName(name);
  while (allFiles.hasNext()) {
    var thisFile = allFiles.next();
    thisFile.setTrashed(true);
  }
  // make new backup
  file.makeCopy(name, destination);
}