How to write output to a CSV file using Selenium WebDriver - selenium-webdriver

Is there any way I can write output to a CSV file using Selenium WebDriver?
Please help.

I believe you want to store your test results in a CSV that can be shared with your fellow members, so you may first consider combining Selenium with one of the test frameworks such as JUnit or TestNG. Then, after every test is run, you can store the values in a common results CSV for a particular suite. As @AbhijeetVaikar said, you just need to use Java file handling to write out the output you want.
For example, you can use the opencsv library to write CSV files.
Library link - http://sourceforge.net/projects/opencsv/files/opencsv/
Example code
import java.io.FileWriter;
import com.opencsv.CSVWriter;   // older opencsv releases use the package au.com.bytecode.opencsv

String csv = "C:\\output.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
String[] country = "India#China#United States".split("#");
writer.writeNext(country);   // writes one row: "India","China","United States"
writer.close();
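Putting the two pieces together, here is a rough, untested sketch of grabbing values with WebDriver and appending them as rows to a results CSV. The URL, locators, file path, column layout, pass/fail rule, and the choice of ChromeDriver are all illustrative, not from the question:
import java.io.FileWriter;
import com.opencsv.CSVWriter;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ResultsToCsv {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();   // assumes chromedriver is on the PATH
        // open the CSV in append mode so every run adds rows to the same results file
        try (CSVWriter writer = new CSVWriter(new FileWriter("C:\\results.csv", true))) {
            driver.get("https://example.com");
            String title = driver.getTitle();
            String heading = driver.findElement(By.tagName("h1")).getText();
            // one row per check: test name, value read from the page, pass/fail
            writer.writeNext(new String[] { "homepage-title", title, title.isEmpty() ? "FAIL" : "PASS" });
            writer.writeNext(new String[] { "homepage-h1", heading, heading.isEmpty() ? "FAIL" : "PASS" });
        } finally {
            driver.quit();
        }
    }
}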

Related

sshj library for downloading multiple files from a remote server

I use Java 8 and my requirement is to download multiple files from a remote server using the SFTP protocol. It is not necessary to filter the files by name; I need to download all the files in a specific remote folder.
I have seen the library com.hierynomus » sshj for this purpose, but looking around the net I have only found examples that download a single file, not what I need.
What I think is that I could use this method:
String localDir = "/home";
String remoteFile = "/home/folder/*";
SSHClient sshClient = setupSshj();
SFTPClient sftpClient = sshClient.newSFTPClient();
sftpClient.get(remoteFile, localDir);
but I'm not sure whether the asterisk in remoteFile will work for my purpose, and unfortunately I can't try this on the remote server for now.
Can someone help me?
Thanks, everyone
You need to LIST all the files you want to download:
List<RemoteResourceInfo> entries = sftpClient.ls("/home/folder");
After that, loop over the entries and download them one by one:
for (RemoteResourceInfo remoteFile : entries) {
    if (remoteFile.isRegularFile()) {
        sftpClient.get(remoteFile.getPath(), localDir);
    }
}
Edit: you should also check whether the list entry is really a file; I have edited the code accordingly. Though I am not sure whether using !remoteFile.isDirectory() would be better.
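Putting it all together, here is a rough, untested sketch of a complete download loop with sshj; the host name, credentials, and local directory are placeholders you would replace with your own:
import java.util.List;
import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.RemoteResourceInfo;
import net.schmizz.sshj.sftp.SFTPClient;

public class SftpDownloadAll {
    public static void main(String[] args) throws Exception {
        SSHClient ssh = new SSHClient();
        ssh.loadKnownHosts();
        ssh.connect("example.com");          // placeholder host
        try {
            ssh.authPassword("user", "password");   // placeholder credentials
            try (SFTPClient sftp = ssh.newSFTPClient()) {
                // list everything in the remote folder, then download each regular file
                List<RemoteResourceInfo> entries = sftp.ls("/home/folder");
                for (RemoteResourceInfo entry : entries) {
                    if (entry.isRegularFile()) {
                        sftp.get(entry.getPath(), "/home/" + entry.getName());
                    }
                }
            }
        } finally {
            ssh.disconnect();
        }
    }
}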

Unable to scrape non-English fonts - selenium

I am new to Selenium and I am trying a few sites for testing purposes.
I came across a scenario where Tamil and Hindi text is scraped as "??????".
I tried to open the output in Notepad++, Sublime Text, and Excel but it still displays as "??????".
XPath tried - //h1//following::p[@id='topDescription']
Test URLs
"https://www.hooq.tv/catalog/7a6d593d-e8f3-47b6-92ae-469b8e08178e?__sr=feed"
"https://www.hooq.tv/catalog/d023630f-882b-4df4-8cb5-857ebfff20b4?__sr=feed"
code
d.get("https://www.hooq.tv/catalog/7a6d593d-e8f3-47b6-92ae-469b8e08178e?__sr=feed");
d.findElement(By.xpath("//h1//following::p[@id='topDescription']")).getText();
Is this an encoding issue?
First, make sure that you can get the raw text properly before saving it into an external file.
I tested .getText() in Java for your element and it returns the String as-is.
Next, you need to make sure that during file writing, the charset encoding is UTF-8.
Here's a sample using org.apache.commons.io.FileUtils:
FileUtils.write(new File("C:/temp/test.txt"), str, "UTF-8");
FileUtils.write(new File("C:/temp/test.csv"), str, "UTF-8");
Hope it helps.
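Putting both steps together, here is a minimal, untested sketch; the choice of ChromeDriver and the local output path are my assumptions, not part of the original question:
import java.io.File;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class NonEnglishTextToFile {
    public static void main(String[] args) throws Exception {
        WebDriver d = new ChromeDriver();   // assumes chromedriver is on the PATH
        try {
            d.get("https://www.hooq.tv/catalog/7a6d593d-e8f3-47b6-92ae-469b8e08178e?__sr=feed");
            // getText() returns a Java String, which is already Unicode internally
            String str = d.findElement(By.xpath("//h1//following::p[@id='topDescription']")).getText();
            // write with an explicit UTF-8 charset so Tamil/Hindi characters are preserved on disk
            FileUtils.write(new File("C:/temp/test.txt"), str, StandardCharsets.UTF_8);
        } finally {
            d.quit();
        }
    }
}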

how to use the glade xml file to make an executable program

I am learning to use Glade 3 to create a GUI.
However, the *.glade file is an XML file, and I am not sure how to go forward from here. A Google search is not really helping. There is a question already asked about the same thing here: Tool to convert .Glade (or xml) file to C source. However, I am not really able to understand the answer given there.
Can someone explain the basic flow of the development cycle using Glade 3?
Design the UI in Glade.
Generate the *.glade XML file.
AND THEN WHAT?
How can the XML file be converted into an executable?
A. Should I convert this XML file to a language (C) and compile the C code?
B. Or is there a way for the XML to be directly converted into an ELF executable?
I am trying to make the GUI for my own use. I use Linux and want an ELF executable (like the one I would get if I wrote the C code using the GTK library and compiled it with gcc).
If we look at the Wikipedia page for Glade, it has an entire section about how to use Glade in a program: with GtkBuilder. All that remains is to read the docs and you can begin using Glade. No offense, but I've never used Glade before and this is fairly clear in all the docs; for example, here's Glade's homepage.
I would do something like this:
DerivedWindow::DerivedWindow()
{
    mainBox = Gtk::manage(new Gtk::Box(Gtk::ORIENTATION_VERTICAL, 7));
    builder = Gtk::Builder::create();
    try {
        builder->add_from_file("filename.glade");
    } catch (Glib::Error& ex) {
        errMsg("Window Builder Failed: " + ex.what());
    }
    Gtk::Box* box;
    builder->get_widget("name of box inside main window", box);
    if (!box) { this->destroy_(); return; }
    box->unparent();
    mainBox->pack_start(*box, Gtk::PACK_SHRINK);
    // optional - if you want full access to particular widgets
    builder->get_widget("name of widget id", widgetname);
    // connect signals here...
    add(*mainBox);
    show_all();
}
Note this is Gtkmm 3+.
It is important that you unparent the box you got from the Glade file so that you can attach it to your derived window.

Hadoop Map Whole File in Java

I am trying to use Hadoop in Java with multiple input files. At the moment I have two files: a big one to process and a smaller one that serves as a sort of index.
My problem is that I need to keep the whole index file unsplit while the big file is distributed to each mapper. Is there any way provided by the Hadoop API to do such a thing?
In case I have not expressed myself correctly, here is a link to a picture that represents what I am trying to achieve: picture
Update:
Following the instructions provided by Santiago, I am now able to insert a file (or the URI, at least) from Amazon's S3 into the distributed cache like this:
job.addCacheFile(new Path("s3://myBucket/input/index.txt").toUri());
However, when the mapper tries to read it, a 'file not found' exception occurs, which seems odd to me. I have checked the S3 location and everything seems fine. I have used other S3 locations to provide the input and output files.
Error (note the single slash after the s3:)
FileNotFoundException: s3:/myBucket/input/index.txt (No such file or directory)
The following is the code I use to read the file from the distributed cache:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(cacheFile[0].toString()));
while ((line = br.readLine()) != null) {
    //Do stuff
}
I am using Amazon's EMR, S3 and the version 2.4.0 of Hadoop.
As mentioned above, add your index file to the Distributed Cache and then access it in your mapper. Behind the scenes, the Hadoop framework ensures that the index file is sent to all the task trackers before any task is executed, so it is available for your processing. In this case, the data is transferred only once and is available to all the tasks related to your job.
However, instead of adding the index file to the Distributed Cache in your mapper code, make your driver class implement the Tool interface (run via ToolRunner) and override the run method. This gives you the flexibility of passing the index file to the Distributed Cache from the command line while submitting the job.
If you are using ToolRunner, you can add files to the Distributed Cache directly from the command line when you run the job. There is no need to copy the file to HDFS first. Use the -files option with a comma-separated list (no spaces) to add files:
hadoop jar yourjarname.jar YourDriverClassName -files cachefile1,cachefile2,cachefile3,...
You can access the files in your Mapper or Reducer code as below:
File f1 = new File("cachefile1");
File f2 = new File("cachefile2");
File f3 = new File("cachefile3");
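For reference, here is a minimal sketch of such a driver; the class name, job name, and configuration details are illustrative, not from the question:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver that implements Tool so generic options such as -files are parsed for you
public class IndexJoinDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "index join");
        job.setJarByClass(IndexJoinDriver.class);
        // mapper/reducer classes and key/value types omitted for brevity
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips -files (and other generic options) before calling run()
        int exitCode = ToolRunner.run(new Configuration(), new IndexJoinDriver(), args);
        System.exit(exitCode);
    }
}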
You could push the index file to the distributed cache, and it will be copied to the nodes before the mapper is executed.
See this SO thread.
Here's what helped me to solve the problem.
Since I am using Amazon's EMR with S3, I needed to change the syntax a bit, as stated on the following site.
It was necessary to add the name that the system will use to read the file from the cache, as follows:
job.addCacheFile(new URI("s3://myBucket/input/index.txt" + "#index.txt"));
This way, the program understands that the file introduced into the cache is named just index.txt. I also needed to change the syntax for reading the file from the cache. Instead of reading the entire path stored in the distributed cache, only the filename has to be used, as follows:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader("index.txt"));
while ((line = br.readLine()) != null) {
    //Do stuff
}
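For completeness, here is a sketch of a mapper that loads the cached index.txt in setup(); the key/value types and the tab-separated index format are my assumptions, not from the question:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class IndexAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

    private final Map<String, String> index = new HashMap<>();

    @Override
    protected void setup(Context context) throws IOException {
        // the cached file is symlinked into the task's working directory under its fragment name
        try (BufferedReader br = new BufferedReader(new FileReader("index.txt"))) {
            String line;
            while ((line = br.readLine()) != null) {
                String[] parts = line.split("\t", 2);   // assumes tab-separated index entries
                if (parts.length == 2) {
                    index.put(parts[0], parts[1]);
                }
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) {
        // use the in-memory index while processing each split of the big file
    }
}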

How to get the version number from a DACPAC file

I generate DACPAC files from automated builds with a generated version number. While this is useful during SqlPackage operations, I need to be able to determine the version number of a DACPAC before doing anything with the file.
What tooling can I use (automated of course) to query the DACPAC file for its version number and description?
Hey, I know you found a solution, but I have an alternative method that may help someone else.
By referencing Microsoft.SqlServer.Management.Dac.dll and using the DacType class:
using System.IO;
using Microsoft.SqlServer.Management.Dac;
(Not entirely sure which using statements are needed - I have copied this from a larger DAC helper file.)
using (Stream dacPackFileStream = File.Open(this.dacPackFileName, FileMode.Open))
{
    var dacType = DacType.Load(dacPackFileStream);
    dacPackFileStream.Close();
    return dacType.Version;
}
DACPAC files are actually zip files. Extract the zip and query the file DacMetaData, which is an XML file, using the XPath /DacType/Version.
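If you want to script that check, here is a rough sketch in Java of the unzip-and-XPath approach; the entry name DacMetaData and the XPath come from the answer above and should be verified against your own DACPAC, and the file path is a placeholder:
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class DacpacVersionReader {
    public static void main(String[] args) throws Exception {
        // a DACPAC is a zip archive, so open it with the standard zip API
        try (ZipFile zip = new ZipFile("C:/temp/database.dacpac")) {   // placeholder path
            ZipEntry entry = zip.getEntry("DacMetaData");
            if (entry == null) {
                throw new IllegalStateException("No DacMetaData entry found in the package");
            }
            try (InputStream in = zip.getInputStream(entry)) {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(in);
                XPath xpath = XPathFactory.newInstance().newXPath();
                String version = xpath.evaluate("/DacType/Version", doc);
                System.out.println("DACPAC version: " + version);
            }
        }
    }
}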
