C# Selenium always returns the same object

Hey guys, I have a problem with Selenium. What I do is:
navigate to a page with support tickets, switch to the iframe which shows them, find the container of all the ticket items (XPath "//*[@id='task_table']/tbody") and then select all the entries in the list of tickets (XPath "//*[contains(@id, 'row_task_')]").
Now the problem: I'm iterating through the list of 20 items I got back (tried with a foreach loop as well) to select sub-elements of those entries, for example to get the ticket number. This works for the first item, but after that it always gives back the same values as for the first item. If I print the innerHTML or Text of the whole element, however, I see that the correct element is selected, holding the corresponding values.
Can someone tell me why I'm always getting the values from the first element in the list?
private void grabTicketData()
{
    var docUrl = @"https://it4you.xyz/task_list_do#someParameters";
    ChromeOptions chromeOptions = new ChromeOptions();
    chromeOptions.AddArgument("--headless");
    IWebDriver driver = new ChromeDriver(chromeOptions); // chromeOptions
    driver.Navigate().GoToUrl(docUrl);
    driver.Manage().Timeouts().ImplicitWait = new TimeSpan(10000);
    System.Threading.Thread.Sleep(3000);
    driver.SwitchTo().Frame("gsft_main");
    var webAppIframe = driver.FindElement(By.XPath("//*[@id='task_table']/tbody"));
    var elements = webAppIframe.FindElements(By.XPath(@"//*[contains(@id, 'row_task_')]"));
    var newLstTickets = new ObservableCollection<Ticket>();
    for (int i = 0; i <= (elements.Count - 1); i++)
    {
        Debug.WriteLine(elements[i].Text);
        //var itemInnerHtml = elements[i].GetAttribute("innerHTML");
        var Id = elements[i].FindElement(By.XPath("//td[3]/a")).Text;
        var Prio = elements[i].FindElement(By.XPath("//td[4]")).Text;
        var Status = elements[i].FindElement(By.XPath("//td[5]")).Text;
        var DelegatedTo = elements[i].FindElement(By.XPath("//td[7]")).Text;
        var Subject = elements[i].FindElement(By.XPath("//td[8]")).Text;
        var Type = elements[i].FindElement(By.XPath("//td[9]")).Text;
        Debug.WriteLine("#####ID:" + Id + " ---- Prio:" + Prio + " -- Status:" + Status + " - DelegatedTo:" + DelegatedTo + " - Subject:" + Subject + " - Type:" + Type);
        newLstTickets.Add(new Ticket(Id, Prio, Status, DelegatedTo, Subject, Type));
    }
    driver.Quit();
}
Thanks in advance! :)

Well, I got it working. I'm kinda confused why it is this way, but it works. I had expected to get back the single elements I'd selected (which is actually true), but for some reason, if I navigate the DOM again by XPath, that navigation seems to be done on the table I selected before and not within the selected table entry...
The solution is pretty straightforward: just start the DOM navigation one step higher and select the containing element first with XPath:
var Id = elements[i].FindElement(By.XPath("//*[contains(@id, 'row_task_')][" + (i + 1).ToString() + "]/td[3]/a")).Text;
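For context, the reason the original loop kept returning the first row's values is that an XPath starting with // is evaluated against the whole document even when FindElement is called on an element, whereas a path starting with .// is evaluated relative to that element. A minimal sketch of the same loop using relative XPaths (untested, assuming the column layout from the code above):
for (int i = 0; i < elements.Count; i++)
{
    // ".//" scopes each lookup to the current row instead of the whole document
    var Id = elements[i].FindElement(By.XPath(".//td[3]/a")).Text;
    var Prio = elements[i].FindElement(By.XPath(".//td[4]")).Text;
    var Status = elements[i].FindElement(By.XPath(".//td[5]")).Text;
    var DelegatedTo = elements[i].FindElement(By.XPath(".//td[7]")).Text;
    var Subject = elements[i].FindElement(By.XPath(".//td[8]")).Text;
    var Type = elements[i].FindElement(By.XPath(".//td[9]")).Text;
    newLstTickets.Add(new Ticket(Id, Prio, Status, DelegatedTo, Subject, Type));
}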

Related

C# strange issue - unable to assign value from right to left variable

I have a list Rows which holds 10 different records. I am looping over this list in a C# console app and inserting values into another list, but it only picks the first record and inserts it 10 times into the new list.
When I debug, unique values are shown in the loop, but they are not being assigned to the left-hand variable.
List<Job> jobList = new List<Job>();
foreach (var row in rows)
{
    Job job = new Job();
    job.Title = row.SelectSingleNode("//h2[@class='jobtitle']").ChildNodes[1].Attributes["title"].Value;
    job.Summary = row.SelectSingleNode("//span[@class='summary']").InnerText;
    jobList.Add(job);
}
Any idea what is happening?
I also tried forcing the garbage collector, but still no improvement:
job = null;
GC.Collect();
GC.WaitForPendingFinalizers();
Here is the updated code after @Andrew's suggestion, but it didn't work. The right-hand side holds updated values, but they are not being assigned to the left-hand variables.
foreach (var row in rows)
{
    try
    {
        var job = new Job();
        var title = row.SelectSingleNode("//h2[@class='jobtitle']").ChildNodes[1].Attributes["title"].Value;
        var company = row.SelectSingleNode("//span[@class='company']").InnerText.Replace("\n", "").Replace("\r", "");
        var location = row.SelectSingleNode("//span[@class='location']").InnerText.Replace("\n", "").Replace("\r", "");
        var summary = row.SelectSingleNode("//span[@class='summary']").InnerText.Replace("\n", "").Replace("\r", "");
        job.Title = title;
        job.Company = company;
        job.Location = location;
        job.Summary = summary;
        jobList.Add(job);
        job = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        counter++;
        Status("Page# " + pageNumber.ToString() + " : Record# " + counter + " extracted");
    }
    catch (Exception)
    {
        AppendRecords(jobList);
        jobList.Clear();
    }
    //save file
}
Hi. You don't tell us what the rows variable refers to, but I assume these are nodes in a single XmlDocument. The XPath expressions you are using to extract values from these nodes are incorrect, because they will always navigate to the same node in the document, irrespective of the current row node.
Here's a simple example that demonstrates the problem:
static void Main(string[] args)
{
    XmlDocument x = new XmlDocument();
    x.LoadXml(@"<rows> <row><bla><h2>bob1</h2></bla></row> <row><bla><h2>bob2</h2></bla></row> </rows>");
    var rows = x.GetElementsByTagName("row");
    foreach (XmlNode row in rows)
    {
        var h2 = row.SelectSingleNode("//h2").ChildNodes[0].Value;
        Console.WriteLine(h2);
    }
}
The output from this will be
bob1
bob1
Not what you were expecting? Have a play with the example in Dot Net Fiddle. Take another look at your XPath expression. Your current expression //h2 is saying "give me all h2 elements in the document irrespective of the current node". Whereas .//h2 would give you the h2 elements that are descendants of the current row node, which is probably what you need.
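Applying that to the code in the question, a sketch of the loop with relative XPath expressions (untested, and assuming the class names and Job properties from the original code):
foreach (var row in rows)
{
    var job = new Job();
    // The leading "." anchors each expression to the current row node
    job.Title = row.SelectSingleNode(".//h2[@class='jobtitle']").ChildNodes[1].Attributes["title"].Value;
    job.Summary = row.SelectSingleNode(".//span[@class='summary']").InnerText;
    jobList.Add(job);
}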

How to Avoid Updating item searched by Dictionary?

In a .NET WinForms C# app, I have a Dictionary<string, RefItemDetails> collection named listItems. I store data in it on start of the app. In a static class I have the following:
// listItems is populated by reading a file.
// No other operations are performed on it, except finding an item.
public static Dictionary<string, RefItemDetails> itemsList;
// Find RefItemDetails of barcode
public static RefItemDetails FindRefItem(string barcode)
{
    RefItemDetails itemDet = null;
    try
    {
        //if (itemsList.TryGetValue(barcode, out itemDet) == false)
        //    System.Diagnostics.Debug.WriteLine("Barcode " + barcode + " not found in Items");
        //itemDet = itemsList.First(item => item.Key == barcode);//.First(item => item.Barcode == barcode);
        if (itemsList.ContainsKey(barcode))
            itemDet = itemsList[barcode];
    }
    catch (Exception)
    {
        itemDet = null;
    }
    return itemDet;
}
For retrieving an item from listItems in another class, I use:
refScannedItem = null;
// Get Unit Barcode & Search RefItem of it
refScannedItem = UtilityLibrary.FindRefItem(boxItem.UnitBarcode.Trim());
// Display BOX item details
refScannedItem.Barcode = boxItem.BoxBarcode.Trim();
refScannedItem.Description = "BOX of " + boxItem.Quantity + " " + refScannedItem.Description;
refScannedItem.Retail *= boxItem.Quantity;
refScannedItem.CurrentCost *= boxItem.Quantity;
What happens above is: I search for an item, get it back as refScannedItem, and prepend "BOX of " + boxItem.Quantity + " " to its Description. So if the original Description was "Aquafina", it becomes "BOX of 10 Aquafina". The next time I scan the same product, I find it again, but its description has already become "BOX of 10 Aquafina", so my line setting the Description turns it into "BOX of 10 BOX of 10 Aquafina", then "BOX of 10 BOX of 10 BOX of 10 Aquafina", and so on.
As you can see in my find code, I initially used TryGetValue, then tried LINQ, then tried ContainsKey, but with all of them the value stored in listItems gets updated.
I understand that TryGetValue has an out parameter, so the value is passed as a reference and will be changed. But itemsList[key] also updates it! How can I avoid this? I chose a Dictionary for easy and fast searching, but this part causes a lot of problems and a big bug in my app. I couldn't find any solution where I receive the value without the stored one being updated; all the articles show how to search and update it.
Kindly suggest a solution for the above. Any help is highly appreciated.
Thanks
You return a reference to the item contained in your Dictionary, so it makes sense that any manipulations you make to this object are stored in the original Dictionary object.
What you want to do is, in FindRefItem, return a copy of your RefItemDetails object.
An easy way to do this would be to write a new constructor.
Something like:
public RefItemDetails(RefItemDetails original)
{
    this.Barcode = original.Barcode;
    this.Description = original.Description;
    this.Retail = original.Retail;
    this.CurrentCost = original.CurrentCost;
    // Set any other properties here
}
and then replace return itemDet; with return new RefItemDetails(itemDet);
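A sketch of FindRefItem with that change (using the copy constructor above; a null check keeps the "not found" behaviour of returning null):
public static RefItemDetails FindRefItem(string barcode)
{
    RefItemDetails itemDet;
    if (!itemsList.TryGetValue(barcode, out itemDet))
        return null;                        // barcode not found
    return new RefItemDetails(itemDet);     // hand back a copy, not the stored instance
}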
I think you are going about this the wrong way.
You shouldn't have a writeable Description property that you update whenever the quantity changes.
Instead, I think you should have a separate Name property which contains the name of the item (e.g. Aquafina) and a dynamically-created readonly Description property like so:
public string Description
{
    get
    {
        return string.Format("Box of {0} {1}", Quantity, Name);
    }
}
Or something along similar lines.
This should do the trick:
if (!refScannedItem.Description.StartsWith("BOX of "))
{
    refScannedItem.Description = "BOX of " + boxItem.Quantity + " " + refScannedItem.Description;
}
You are getting an object from a Dictionary and then changing it, so of course its properties will change in the Dictionary as well: you haven't cloned it, so you are not working on a copy.
The solution is quite simply not to change the values of the item, and to use the amended text in a different way:
var amendedDescription = "BOX of " + boxItem.Quantity + " " + refScannedItem.Description;
Looks like you need to make a copy of the RefItemDetails, and make your modifications to that copy, after you get it back from the call to UtilityLibrary.FindRefItem(boxItem.UnitBarcode.Trim()).

SharePoint add item to custom list results in "Invalid URL value. A URL field contains invalid data. Please check the value and try again"

Setup
I am programmatically adding elements to a custom list with custom columns from C# code:
// Get the list
var context = SPContext.Current;
var web = context.Site.RootWeb;
web.AllowUnsafeUpdates = true;
var favoritesList = web.Lists["Favoritter"];

// Check if new item already exists
var query = new SPQuery
{
    Query = string.Format(
        "<Where>" +
            "<And>" +
                "<Eq><FieldRef Name='Brugernavn'/><Value Type='Text'>{0}</Value></Eq>" +
                "<And>" +
                    "<Eq><FieldRef Name='Fagomr_x00e5_de'/><Value Type='Text'>{1}</Value></Eq>" +
                    "<Eq><FieldRef Name='N_x00f8_gletalsnummer'/><Value Type='Text'>{2}</Value></Eq>" +
                "</And>" +
            "</And>" +
        "</Where>", GetUserName(false), omraade, noegletalsId)
};
var items = favoritesList.GetItems(query);
if (items.Count > 0)
    return false;

// Otherwise add the new item
var favorite = favoritesList.Items.Add();
favorite["Brugernavn"] = GetUserName(false);
favorite["Fagomr_x00e5_de"] = omraade;
favorite["N_x00f8_gletalsnummer"] = noegletalsId;
favorite.Update(); // <--- THIS LINE THROWS EXCEPTION
web.AllowUnsafeUpdates = false;
return true;
The problem
When I perform the Update() command on the new item the following exception is thrown:
Microsoft.SharePoint.SPException:
Invalid URL value. A URL field contains invalid data. Please check the value and try again
Additional information
The three custom columns I have made are all of type SPFieldText and thus have nothing to do with URLs.
I have also hidden the default Title field using PowerShell:
$titleField = $favoritesList.Fields.GetField("Title")
$titleField.LinkToItem = $false
$titleField.Required = $false
$titleField.Hidden = $true
$titleField.Update()
The XML schema for the list can be found here.
Have you checked whether your list variable (favoritesList) actually holds a reference to the list you are trying to access?
It would be better to use SPWeb.Lists.TryGetList (http://msdn.microsoft.com/library/office/microsoft.sharepoint.splistcollection.trygetlist.aspx) instead of web.Lists[]. TryGetList returns null if the specified list cannot be found, so you can make sure the favoritesList variable holds a reference to the list.
You can also use SPWeb.GetList (http://msdn.microsoft.com/library/office/microsoft.sharepoint.spweb.getlist.aspx) and check for an exception.
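A minimal sketch of that check, using the list name from the question:
var favoritesList = web.Lists.TryGetList("Favoritter");
if (favoritesList == null)
{
    // The list was not found on this web, so stop here instead of failing later
    return false;
}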
Hope this helps.
First of all, you shouldn't use the SPList.Items collection, for performance reasons; use the SPList.AddItem method instead.
Second, the problem is in the arguments of the Add method: the first argument should be the server-relative URL of the folder where the list item should be created. You need to get the folder from the list and then get its URL.
Or you may be able to simply use favoritesList.AddItem().
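A sketch of the last block of the question rewritten with AddItem (same field names as above; untested):
// AddItem avoids materialising the whole SPList.Items collection
var favorite = favoritesList.AddItem();
favorite["Brugernavn"] = GetUserName(false);
favorite["Fagomr_x00e5_de"] = omraade;
favorite["N_x00f8_gletalsnummer"] = noegletalsId;
favorite.Update();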

Extracting table using Htmlagilitypack + LINQ + Lambda

I'm having some difficulties using a lambda expression to parse an HTML table.
var cells = htmlDoc.DocumentNode
    .SelectNodes("//table[@class='data stats']/tbody/tr")
    .Select(node => new { playerRank = node.InnerText.Trim() })
    .ToList();
foreach (var cell in cells)
{
    Console.WriteLine("Rank: " + cell.playerRank);
    Console.WriteLine();
}
I'd like to keep using the same syntax, i.e.
.Select(node => new { playerRank = node.InnerText.Trim() })
but for the other columns of the table, such as player name, team, position, etc. I'm using XPath, so I'm unsure if it's correct.
I'm having an issue finding out how to extract the link + player name from:
Steven Stamkos
The Xpath for it is:
//*[@id="fullPage"]/div[3]/table/tbody/tr[1]/td[2]/a
Can anyone help out?
EDIT: added the HTML page.
http://www.nhl.com/ice/playerstats.htm?navid=nav-sts-indiv#
This should get you started:
var result = (from row in doc.DocumentNode.SelectNodes("//table[@class='data stats']/tbody/tr")
              select new
              {
                  PlayerName = row.ChildNodes[1].InnerText.Trim(),
                  Team = row.ChildNodes[2].InnerText.Trim(),
                  Position = row.ChildNodes[3].InnerText.Trim()
              }).ToList();
The ChildNodes property contains all the cells in a row. The index determines which cell you get.
To get the url from the anchor tag contained in the player name cell:
var result = (from row in doc.DocumentNode.SelectNodes("//table[@class='data stats']/tbody/tr")
              select new
              {
                  PlayerName = row.ChildNodes[1].InnerText.Trim(),
                  PlayerUrl = row.ChildNodes[1].ChildNodes[0].Attributes["href"].Value,
                  Team = row.ChildNodes[2].InnerText.Trim(),
                  Position = row.ChildNodes[3].InnerText.Trim()
              }).ToList();
The Attributes collection is a list of the attributes in an HTML element. We are simply grabbing the value of href.
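For completeness, consuming result could look something like the question's original loop (a sketch; the property names come from the anonymous type above):
foreach (var player in result)
{
    Console.WriteLine("Name: " + player.PlayerName);
    Console.WriteLine("Url:  " + player.PlayerUrl);
    Console.WriteLine("Team: " + player.Team + " (" + player.Position + ")");
}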

Iterate all 'select' elements and get all their values in Selenium

I have the following code in C# using selenium:
private void SelectElementFromList(string label)
{
    var xpathcount = selenium.GetXpathCount("//select");
    for (int i = 1; i <= xpathcount; ++i)
    {
        string[] options;
        try
        {
            options = selenium.GetSelectOptions("//select[" + i + "]");
        }
        catch
        {
            continue;
        }
        foreach (string option in options)
        {
            if (option == label)
            {
                selenium.Select("//select[" + i + "]", "label=" + label);
                return;
            }
        }
    }
}
The problem is the line:
options = selenium.GetSelectOptions("//select["+i+"]");
When i == 1 this works, but when i > 1 the method returns null ("ERROR: Element //select[2] not found"). It works only when i == 1.
I have also tried this code in JS:
var element = document.evaluate("//select[1]/option[1]/@value", document, null, XPathResult.ANY_TYPE, null);
alert(element.iterateNext());
var element = document.evaluate("//select[2]/option[1]/@value", document, null, XPathResult.ANY_TYPE, null);
alert(element.iterateNext());
This prints "[object Attr]" and then "null".
What am I doing wrong?
My goal is to iterate all "select" elements on the page and find the one with the specified label and select it.
This is the second most frequently asked question about XPath (the first being about unprefixed names and the default namespace).
In your code:
options = selenium.GetSelectOptions("//select["+i+"]");
An expression of this type is evaluated:
//select[position() = $someIndex]
which is a synonym for:
//select[$someIndex]
when it is known that $someIndex has an integer value.
However, by definition of the // XPath pseudo-operator,
//select[$k]
when $k is integer, means:
"Select all select elements in the document that are the $k-th select child of their parent."
When i == 1 this works, but when i > 1 the method returns null ("ERROR: Element //select[2] not found"). It works only when i == 1.
This simply means that in the XML document there is no element that has more than one select child.
This is a rule to remember: The [] XPath operator has higher precedence (priority) than the // pseudo-operator.
The solution: As always when we need to override the default precedence of operators, we must use brackets.
Change:
options = selenium.GetSelectOptions("//select["+i+"]");
to:
options = selenium.GetSelectOptions("(//select)["+i+"]");
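Applied to the loop in the question, only the two XPath strings need to change; a sketch:
// "(//select)[i]" first selects every select element in the document,
// then takes the i-th node from that combined node set
options = selenium.GetSelectOptions("(//select)[" + i + "]");
// ...and, once the matching label is found:
selenium.Select("(//select)[" + i + "]", "label=" + label);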
Finally I've found a solution.
I've just replaced these lines
options = selenium.GetSelectOptions("//select["+i+"]");
selenium.Select("//select["+i+"]", "label="+label);
with these
options = selenium.GetSelectOptions("//descendant::select[" + i + "]");
selenium.Select("//descendant::select[" + i + "]", "label=" + label);
The above solution, options = selenium.GetSelectOptions("(//select)["+i+"]");, didn't work for me, so I tried CSS selectors instead.
I wanted to get the username and password text boxes. I tried css=input, which gave me the Username text box, and css=input+input, which gave me the Password text box.
You can combine these selectors with many other things.
Here is the link where I read about it.
I think this will help you achieve your goal.
Regards.
