I am using Selenium WebDriver.
I have to read the XPath of a link from a file and check whether the link is present on the webpage; if it is present, then click on it.
That's it!
Here is the file with the links:
link1 //a[contains(text(), 'Volunteer Registration')]/@href
link2 //a[contains(text(), 'Sign Up')]/@href
link3 //a[contains(text(), 'Register/sign Up')]/@href
Likewise, I have one file from which I'll read a link name and its associated XPath, and based on that XPath I'll check whether the link is present on the webpage or not.
The code I have written for that is:
Reading data from the text file into a HashMap:
public HashMap<String, String> readDataFromFile(String fileName) {
    // 'recruiters' is the class-level HashMap<String, String> that holds name -> xpath pairs
    try {
        FileReader fr = new FileReader(fileName);
        BufferedReader br = new BufferedReader(fr);
        String strLine = null;
        String[] prop = null;
        while ((strLine = br.readLine()) != null) {
            // Each line is expected to be: <link name><TAB><xpath>
            prop = strLine.split("\t");
            recruiters.put(prop[0], prop[1]);
        }
        br.close();
        fr.close();
    } catch (Exception exception) {
        System.out.println("Unable to read data from recruiter file: " + exception.getMessage());
    }
    return recruiters;
}
Method to return the XPath value from the HashMap based on the key:
public String findValue(String name) {
    // Look the xpath up by its link name (recruiters.get(name) would do the same more directly)
    for (Map.Entry<String, String> entry : recruiters.entrySet()) {
        if (entry.getKey().equals(name)) {
            return entry.getValue();
        }
    }
    return null;
}
Now I want to write a method that just checks whether the link for a given XPath is present on the webpage or not.
Please help me with that.
The logic is something like:
public void searchAndClickLink() {
    List<WebElement> links = driver.findElements(By.tagName("a"));
    System.out.println(links.size());
    for (WebElement myElement : links) {
        String link = myElement.getText();
        System.out.println(link);
        myElement.click();
        driver.navigate().back();
    }
}
But I am not sure about it.
Please let me know whether the approach is correct and whether this function is appropriate.
Please suggest a better way to implement the code.
Thanks!
Well...there are a couple of problems with that last set of code.
The first is that you are going to get StaleElementReferences. When you find an element (or a list of elements), the references point to elements on the current page. If you refresh the page, or leave and come back, they will no longer be valid, and you have to re-find all of your elements.
Also, many times a link doesn't navigate you to a new page. If this is the case with any of your links, you will suddenly find yourself clicking links on the wrong page (because you navigated back).
Finally, you aren't actually doing anything on the page. For all you know, the link could go to a 500 Error, and Selenium would have no idea.
However, since you have all of the links in a file, why not just read the file, store it in an array, and then do a simple for loop:
for (String linkName : allLinks) {
    driver.get(urlWithLinks);
    driver.findElement(By.linkText(linkName)).click();
    // ...validate the page...
}
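If you want to drive this from your file of XPaths instead of link texts, a rough sketch could look like the following. It assumes the usual java.util and Selenium imports, the recruiters map returned by your readDataFromFile method, the urlWithLinks variable from above, and a file name of "links.txt" (all assumptions, not code from the question):
HashMap<String, String> recruiters = readDataFromFile("links.txt"); // file name assumed
for (Map.Entry<String, String> entry : recruiters.entrySet()) {
    driver.get(urlWithLinks); // reload the page so element references stay fresh
    // The stored XPaths end in "/@href", which selects an attribute;
    // strip it so the expression resolves to the clickable <a> element itself.
    String elementXPath = entry.getValue().replace("/@href", "");
    List<WebElement> matches = driver.findElements(By.xpath(elementXPath));
    if (!matches.isEmpty()) {
        matches.get(0).click();
        // ...validate the resulting page here...
    } else {
        System.out.println("Link not found for key: " + entry.getKey());
    }
}
Using findElements (plural) avoids a NoSuchElementException when a link is absent, so the loop can simply report and move on.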
Lastly...I personally believe that clicking on all of the links on a page is a terrible test. A much better test would be to go to the link and then do stuff on the page. That way you are actually testing the functionality of the website.
I'm interested in seeing if I can modify some XMP within an image file. I'm using the following code:
var items = MetadataExtractor.ImageMetadataReader.ReadMetadata(_filename);
foreach (var item in items)
{
    if (item.Name == "XMP")
    {
        var y = new XmpCore.Impl.XmpMeta();
        var xmp = item as MetadataExtractor.Formats.Xmp.XmpDirectory;
        foreach (var xd in xmp.XmpMeta.Properties)
        {
            if (xd.Path == "drone-dji:AbsoluteAltitude")
            {
                var alt = Convert.ToDecimal(xd.Value.Substring(1, xd.Value.Length - 1));
                alt -= 100;
                xmp.XmpMeta.SetProperty(xd.Namespace, xd.Path, alt.ToString());
            }
        }
        xmp.SetXmpMeta(xmp.XmpMeta);
    }
}
I know I'm missing something breathtakingly obvious but I don't know this library well enough to figure it out.
No exceptions come up, but when I open the file the XMP field is still the same. When I iterate through the XMP properties after I set the property, the change is reflected correctly, but when I end the program the file stays the same. I'm sure it has something to do with writing back to the image path, but I have no idea where in this library I do that. Any help would be greatly appreciated.
MetadataExtractor doesn't support modifying files. You can update the data structure, as you show, but there's no way to write those changes back to your original file.
I have a scheduled job that loops through all pages of a certain type and creates a block for each page and puts it in a ContentArea.
if (productPageClone.GeneralContentArea == null)
{
    productPageClone.GeneralContentArea = new ContentArea();
}

var newBlockForArea = _contentRepository.GetDefault<CrossLinkContainerBlock>
    (assetsFolderForPage.ContentLink, productPageClone.Language);
(newBlockForArea as IContent).Name = "newCrossLinkContainer";
var blockReference = _contentRepository.Save((newBlockForArea as IContent), SaveAction.Publish,
    AccessLevel.NoAccess);

var newItem = new ContentAreaItem();
newItem.ContentLink = blockReference;
productPageClone.GeneralContentArea.Items.Add(newItem);
When the block is created it is published.
When the page is updated it is either saved or published depending on earlier status.
_contentRepository.Save(productPageClone, SaveAction.ForceCurrentVersion | SaveAction.Publish,
    AccessLevel.NoAccess);
Later when inspecting the page, the block is in the page's assets folder and the block is in the correct ContentArea and it renders correctly. The only problem is that when I edit the block, it says "This item is not used anywhere."
However, when I republish the page the block is in and then edit the block, it says "Changes made here will affect at least 1 item", as it should.
I am using Episerver 11.11.2.0
I have run the scheduled job manually each time I've tested this.
Has anyone any idea why this is happening?
I found the solution after reading this page:
https://gregwiechec.com/2015/10/reindexing-soft-links/
After the page that has the new block has been published, get the page's soft links and re-index them:
var links = _contentSoftLinkIndexer.GetLinks(productPageClone);
_softLinkRepository.Save(productPageClone.ContentLink.ToReferenceWithoutVersion(),
    productPageClone.Language, links, false);
The soft link tools are obtained like this:
private IContentSoftLinkRepository _softLinkRepository =
    ServiceLocator.Current.GetInstance<IContentSoftLinkRepository>();
private ContentSoftLinkIndexer _contentSoftLinkIndexer =
    ServiceLocator.Current.GetInstance<ContentSoftLinkIndexer>();
This will occur if your content area is null. Try the following:
// Before adding the ContentAreaItem
if (productPageClone.GeneralContentArea == null)
{
    productPageClone.GeneralContentArea = new ContentArea();
}

productPageClone.GeneralContentArea.Items.Add(newItem);
if(!"".equals(MyFrnds.list_Friends))
{
System.out.println("There is friends list");
Actions action= new Actions(driver);
action.contextClick(MyFrnds.list_Friends).sendKeys(Keys.ARROW_DOWN).sendKeys(Keys.RETURN).build().perform();
System.out.println("My first friend link is opened in new tab and clicking on Show more button if there are more than 12 friends ");
if(!"".equals(MyFrnds.list_Friends))
{
MyFrnds.btn_FrShowmore.click();
}else
{
System.out.println("There are no more than 12 records");
}
}
else
{
System.out.println("There are no friends to display. So clicking on these two links to add friends");
// Right on FITBASE MEMBERS link and open in new tab
Actions action= new Actions(driver);
action.contextClick(MyFrnds.lnk_Fitbasemem).sendKeys(Keys.ARROW_DOWN).sendKeys(Keys.RETURN).build().perform();
// Right on Social network link and open in new tab
action.contextClick(MyFrnds.lnk_Socialnet).sendKeys(Keys.ARROW_DOWN).sendKeys(Keys.RETURN).build().perform();
}
In the above code, on the very first line, I gave the if condition as (!"".equals(MyFrnds.list_Friends)), but regardless of the application state it always goes into the first branch of the condition, even though the condition should not be satisfied. Hence we get an error while executing the script. Can anyone suggest what is wrong with the code?
!"".equals(MyFrnds.list_Friends) is true if MyFrnds.list_Friends is not an empty string. For example if MyFrnds.list_Friends is null, the condition would also return true. You may want to perform an additional null check or simply use StringUtils.isNotBlank(MyFrnds.list_Friends).
Java, check whether a string is not null and not empty?
I am using Selenium with the Firefox WebDriver to work with elements on a page whose CSS IDs are unique but regenerated on every page load, so I am unable to use them to locate elements. This is because the page is a web application built with ExtJS.
I am trying to use Firebug to get the element information.
I need to find a unique XPath or selector so I can select each element individually with Selenium.
When I use Firebug to copy the xPath I get a value like this:
//*[#id="ext-gen1302"]
However, the next time the page is loaded it looks like this:
//*[#id="ext-gen1595"]
On that page every element has this ID format, so the CSS ID can not be used to find the element.
I want an XPath expressed in terms of the element's position in the DOM, like the one below, but Firebug will only return the ID-based XPath since the ID is unique for that instance of the page.
/html/body/div[4]/div[3]/div[4]/div/div/div/span[2]/span
How can I get Firebug (or another tool that would work with similar speed) to give me a unique selector that can be used to find the element with Selenium even after the ext-gen ID changes?
As another victim of ExtJS UI automation testing, here are my tips specifically for testing ExtJS. (However, this won't answer the question described in your title.)
Tip 1: Don't ever use an unreadable XPath like /div[4]/div[3]/div[4]/div/div/div/span[2]/span. One tiny change in the source code may change the DOM structure, which causes huge maintenance costs.
Tip 2: Take advantage of meaningful auto-generated partial ids and class names.
For example, for the ExtJS grid example, By.cssSelector(".x-grid-view .x-grid-table") would be handy. If there are multiple grids, try indexing them or locating an identifiable ancestor, like By.cssSelector("#something-meaningful .x-grid-view .x-grid-table").
Tip 3: Create meaningful class names in the source code. ExtJS provides cls and tdCls for custom class names, so you can add cls:'testing-btn-cancel' in your source code, and get it by By.cssSelector(".testing-btn-cancel").
Tip 3 is the best and the final one. If you don't have access to the source code, talk to your manager; Selenium UI automation should really be a developer job for someone who can modify the source code, rather than an end-user-like tester.
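As a rough illustration of Tip 3 (the class name testing-btn-cancel is just the example from the tip, not something your app necessarily has, and the usual Selenium imports are assumed):
// Locate the button via the custom class added through cls:'testing-btn-cancel'
// in the ExtJS component config, then click it.
WebElement cancelButton = driver.findElement(By.cssSelector(".testing-btn-cancel"));
cancelButton.click();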
I would recommend using CSS in this instance by doing By.cssSelector("span[id^='ext-gen']").
The above statement means "select a span element with an id that starts with ext-gen". (If it needs to be more specific, you can reply, and I'll see if I can help you).
Alternatively, if you want to use XPath, look at this answer: Xpath for selecting html id including random number
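For reference, a rough XPath equivalent of the prefix match above (assuming the usual Selenium imports) would be:
// starts-with() ignores the random numeric suffix that ExtJS appends to the id.
WebElement element = driver.findElement(By.xpath("//span[starts-with(@id, 'ext-gen')]"));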
Although it is not desirable in some cases, as mentioned above, you can parse through the elements and generate XPath IDs.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class XPATHDriverWrapper {

    Map<String, WebElement> xpathIDToWebElementMap = new LinkedHashMap<String, WebElement>();
    Map<WebElement, String> webElementToXPATHIDMap = new LinkedHashMap<WebElement, String>();

    public XPATHDriverWrapper(WebDriver driver) {
        WebElement htmlElement = driver.findElement(By.xpath("/html"));
        iterateThroughChildren(htmlElement, "/html");
    }

    private void iterateThroughChildren(WebElement parentElement, String parentXPATH) {
        Map<String, Integer> siblingCountMap = new LinkedHashMap<String, Integer>();
        List<WebElement> childrenElements = parentElement.findElements(By.xpath(parentXPATH + "/*"));
        for (int i = 0; i < childrenElements.size(); i++) {
            WebElement childElement = childrenElements.get(i);
            String childTag = childElement.getTagName();
            String childXPATH = constructXPATH(parentXPATH, siblingCountMap, childTag);
            xpathIDToWebElementMap.put(childXPATH, childElement);
            webElementToXPATHIDMap.put(childElement, childXPATH);
            iterateThroughChildren(childElement, childXPATH);
            // System.out.println("childXPATH:" + childXPATH);
        }
    }

    public WebElement findWebElementFromXPATHID(String xpathID) {
        return xpathIDToWebElementMap.get(xpathID);
    }

    public String findXPATHIDFromWebElement(WebElement webElement) {
        return webElementToXPATHIDMap.get(webElement);
    }

    private String constructXPATH(String parentXPATH, Map<String, Integer> siblingCountMap, String childTag) {
        // Count siblings of the same tag so each child gets a positional index, e.g. div[2]
        Integer count = siblingCountMap.get(childTag);
        if (count == null) {
            count = 1;
        } else {
            count = count + 1;
        }
        siblingCountMap.put(childTag, count);
        return parentXPATH + "/" + childTag + "[" + count + "]";
    }
}
Another wrapper to generate ids from Document is posted at: http://scottizu.wordpress.com/2014/05/12/generating-unique-ids-for-webelements-via-xpath/
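For completeness, a rough usage sketch of the wrapper above; the URL and the generated XPath ID are placeholders, and the usual Selenium imports (including org.openqa.selenium.firefox.FirefoxDriver) are assumed:
WebDriver driver = new FirefoxDriver();
driver.get("http://example.com/page-under-test"); // placeholder URL
// Build the maps once per page load, then look elements up by their generated positional id.
XPATHDriverWrapper wrapper = new XPATHDriverWrapper(driver);
WebElement element = wrapper.findWebElementFromXPATHID("/html/body[1]/div[1]"); // placeholder id
System.out.println(wrapper.findXPATHIDFromWebElement(element));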
How do I extract the data after the class, that is, "HELP FILE"? This text HELP FILE is a link; when clicked, it leads me to another form.
Can any of you please suggest how to proceed with it? I am a newbie to Selenium and I am stuck here. I tried extracting the XPath, but it gives me the path of my home page.
I am using Selenium WebDriver and the Eclipse IDE. My project supports only IE.
<TD align="left" width="185px" NOWRAP valign="top">
<a class="wlcmhding"><IMG SRC="../../images/image1.jpg" border="0"></a><BR>
HELP FILE<BR>
CODE FILE<BR>
</TD>
Try this code:
// Assume driver in initialized properly
String strText = driver.findElement(By.id("Locator id")).getText();
// Print the text of the Web Element
System.out.println(strText);
This is an answer that will return the WebElement inside the tag whose text matches the String query (Help File). Hopefully this will help you; I'm not sure I understood the question, though.
It strikes me that you want to manipulate the WebElement according to the text that is present within it, so this method will most likely work for you.
public WebElement getMessage(final WebDriver driver, final String query) {
    List<WebElement> wlcmLinks = driver.findElements(By.className("wlcmlink"));
    WebElement finalLink = null;
    Iterator<WebElement> itr = wlcmLinks.iterator();
    while (itr.hasNext() && (finalLink == null)) {
        WebElement link = itr.next();
        if (link.getText().equals(query)) {
            finalLink = link;
        }
    }
    if (finalLink == null) {
        throw new NoSuchElementException("No <a /> with that String");
    }
    return finalLink;
}
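A possible usage of the method above, assuming the visible link text is exactly "HELP FILE":
// Find the anchor whose text is "HELP FILE" and click it to open the other form.
WebElement helpLink = getMessage(driver, "HELP FILE");
helpLink.click();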
Try this instead. It might work:
String helpFile = driver.findElement(By.xpath("//td[1]//a[1][@class='wlcmlink']")).getText();
String codeFile = driver.findElement(By.xpath("//td[1]//a[2][@class='wlcmlink']")).getText();