This is my first question, so sorry if the format isn't perfect yet.
I have a small app communicating via MSMQ, so I decided to make an ASPX web page to monitor the content of that queue.
I've tested that page on our acceptance server and it works perfectly fine.
However, when I test it on our prod server, if the queue isn't empty, I get an error page saying "SERVER ERROR IN MSMQ MONITORING : Cannot find a formatter capable of reading this message".
Here's the relevant section of the code:
@{
    var errorMessage = "";
    string queueName = ".\\Private$\\cotfollowupqueue";
    System.Messaging.MessageQueue myqueue = new System.Messaging.MessageQueue(queueName);
    System.Messaging.Message[] msgs = new System.Messaging.Message[0];
    try
    {
        msgs = myqueue.GetAllMessages();
    }
    catch (Exception ex)
    {
        errorMessage = "An error occurred: " + ex.Message;
    }
    myqueue.Formatter = new System.Messaging.XmlMessageFormatter(new Type[] { typeof(AwaitingOrder_DTO) });
}
@section featured {
    <section class="featured">
        <div class="content-wrapper">
            <hgroup class="title">
                <h1>@Page.Title</h1><br />
                <h2>Content of the MSMQ</h2>
            </hgroup>
            <p>This table will show you the content of the Microsoft Message Queuing used for COT Follow Up.</p>
            <p>
                <!-- ADD THINGS HERE -->
                @errorMessage
                <table border="1">
                    <tr>
                        <td>COT ID</td>
                        <td>Row ID</td>
                        <td>Number of attempts</td>
                        <td>Next attempt at</td>
                        <td>Cot Message</td>
                        <td>Status</td>
                        <td>Success</td>
                    </tr>
                    @foreach (var msg in msgs)
                    {
                        myqueue.Formatter = new System.Messaging.XmlMessageFormatter(new Type[] { typeof(AwaitingOrder_DTO) });
                        var message = (AwaitingOrder_DTO)msg.Body;
                        <tr>
                            <td>@message.COTID</td>
                            <td>@message.rowId</td>
                            <td>@message.numberOfRetries</td>
                            <td>@message.nextAttempt</td>
                            <td>@message.cotMessage</td>
                            <td>@message.status</td>
                            <td>@message.success</td>
                        </tr>
                    }
                </table>
            </p>
        </div>
    </section>
}
The code is a copy-paste from one server to the other, and the deployment was done in exactly the same way. Would anyone know what I could look at to correct this?
I've searched for solutions, but: I have a formatter, so that doesn't seem to be the problem.
The code works on another server, so I guess the issue is related to the environment rather than the code itself.
I've checked with "Go To Definition" where the page gets its AwaitingOrder_DTO and queue writer definitions from, and it sends me to a "from metadata" page, which makes me wonder if that might be the trouble. But I highly doubt it, since even if the queue writer isn't in the direct metadata, the page is able to send messages to the MSMQ, just not to read its content.
Any idea?
(Sorry for the long post.)
So, I've found the origin of my problems:
- As stated, the F12 / Go To Definition option showed me metadata, but I couldn't find any real class or code corresponding to it.
=> This led me to look up where the code lives in an ASP.NET web app. The answer: in the App_Code folder, which is then compiled into the App_Code DLL. Guess what? You can't just copy-paste that file and hope it'll work, apparently.
So I took the source code again, recompiled, and replaced the "failing" files. Tada, I can monitor my MSMQ.
(I also had a typo to correct because the page failed to create a correct DTO for the transfer, and I had to clean the queue multiple times since it's in prod and I can't simply send wrong information. But hey, that's how you learn.)
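For reference, the read loop itself boils down to the pattern below, with the XmlMessageFormatter assigned before any message Body is touched. This is only a minimal sketch reusing the queue path and AwaitingOrder_DTO type from the question, not the exact page code:

var queue = new System.Messaging.MessageQueue(".\\Private$\\cotfollowupqueue");
// Assign the formatter up front so every retrieved message can deserialize its body.
queue.Formatter = new System.Messaging.XmlMessageFormatter(new Type[] { typeof(AwaitingOrder_DTO) });

foreach (System.Messaging.Message msg in queue.GetAllMessages())
{
    // Body is deserialized by the formatter configured above.
    var dto = (AwaitingOrder_DTO)msg.Body;
}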
At least now I'm registered on SO.
I'm trying to find out where an extra div is inserted and how I would go about removing it.
In our Index.cshtml file we have this:
<div id="an-id">
    @Html.DisplayFor(m => m.BottomArticleListContentArea, new { ViewName = "_ArticleListBlockNoHeader" })
</div>
And in our _ArticleListBlockNoHeader.cshtml view the code looks like this:
@if (Model != null)
{
    for (int i = 0; i < Model.Items.Count; i++)
    {
        var article = Model.Items[i];
        <div class="item" data-articleid="@article.ArticleId">
            Item @article.ArticleId
        </div>
    }
}
BottomArticleListContentArea is a ContentArea and is handled by this DisplayTemplate:
@model ContentArea
@if (Model != null)
{
    Html.RenderContentArea(Model);
}
As you can see, it doesn't really do anything except check for null. Oh... maybe it does more than that!
When I look at the generated html it looks like this:
<div id="an-id">
<div><div>
<div class="item" data-articleid="1">Item 1</div>
<div class="item" data-articleid="2">Item 2</div>
<div class="item" data-articleid="3">Item 3</div>
<div class="item" data-articleid="4">Item 4</div>
<div class="item" data-articleid="5">Item 5</div>
</div></div>
</div>
Notice the doubled divs (<div><div> and </div></div>) around the items; they are not in the code I just shared above.
My hypothesis is that these two divs might be added by some extension method, some handler, or something else. I'm not familiar with the entire codebase and I'm fairly new to ASP.NET, so my knowledge of the insertion points in ASP.NET is very limited.
Where could these two divs come from? And how can I start looking for them?
If something is missing from the question, please let me know; I tried to be brief so that it would be easier to read and understand 😊
Thanks in advance!
Edit:
I made a mistake while cleaning up the code; I've added it back so that the result shown is what's actually generated by the code.
The HTML result above is what is returned, captured using Fiddler. No JS had started running when the DOM was recorded.
Added the display template used.
This is the solution I have right now and am going to try out for a while:
(highly EPiServer specific)
Instead of letting it be rendered using RenderContentArea, we're taking control and rendering it the "raw" way with RenderContentData:
foreach (var content in Model.FilteredItems)
{
    Html.RenderContentData(content.GetContent(), false, Model.Tag);
}
This gets rid of all the extra divs and makes it behave just the way we want, just the way it looks when we read the code.
Yes, with this we're also losing the ability to edit the page visually in EPiServer. But that's something we can look into later; for now, it works and I'm happy.
I also wrote an extended version that lets me choose whether or not to skip the wrappers, like this (notice DisableWrapper = true):
@Html.DisplayFor(m => m.ContentArea, new { DisableWrapper = true, ViewName = "_ArticleList" })
That can be done if I instead change the first code snippet to this:
var disableWrapper = Html.ViewContext.ViewData["DisableWrapper"] as bool?;
if (disableWrapper.GetValueOrDefault())
{
    foreach (var content in Model.FilteredItems)
    {
        Html.RenderContentData(content.GetContent(), false, Model.Tag);
    }
}
else
{
    Html.RenderContentArea(Model);
}
This works and seems to produce the results I'm looking for. As stated previously, I haven't worked in this field for very long, so if there are any suggestions or tips, I'd love to hear them and improve my solution both here and in the project.
Thanks for the helpful comments that made me finally arrive at this solution!
I'm having an issue with my PHP scripts in an ASP.NET MVC site.
I've deployed to Azure and have double-checked that PHP is enabled.
The PHP script (upload.php) is:
<?php
if (move_uploaded_file($_FILES['file']['tmp_name'], "Content/{$_FILES['file']['name']}")) {
    echo "Programme file uploaded successfully. You will be redirected to the content rater in a few seconds..";
    header("refresh:3; url=Home/Index");
    exit;
}
else {
    echo "XML file not uploaded successfully. Unexpected error.";
    header("refresh:3; url=Home/Index");
    exit;
}
?>
I'm attempting to upload the file to the default 'Content' folder created by Visual Studio. I've tried writing the location as each of the following, and all have failed:
~/Content/
/Content/
Content/
My form is as follows:
<form action="/Scripts/php/upload.php" method="post" enctype="multipart/form-data">
<label>Select file:</label>
<input type="file" name="file">
<br /><br />
<input type="submit" class="btn btn-success" value="Upload">
</form>
No matter what happens, I'm always taken to the failure message.
I thought my script could be wrong, so I tried the script from W3Schools (the image upload example), and that always fails too.
After some research, it seems as though you're unable to upload to your own web directory on Azure. Is this true?
What are my options if it is?
I also have to use PHP, as it's required by the task I'm trying to complete.
After a little fiddling around with your code, I've gotten something that works!
This should work perfectly for you; just change $target_dir. I've tested it on my website. I had the same problems you had with your provided PHP code, which leads me to believe that Azure is not blocking any type of uploading.
For security, however, it would be worthwhile adding some checks, such as file type, file size, and whether the file already exists.
If you are still having problems, use var_dump($_FILES); it simply prints out the variable, which will show whether you are actually receiving the file.
<?php
$target_dir = "Content/";
$target_file = $target_dir . basename($_FILES["file"]["name"]);
// Note: this check requires the submit button to carry name="submit" in the form.
if (isset($_POST["submit"]))
{
    if (move_uploaded_file($_FILES['file']['tmp_name'], $target_file)) {
        // Success
        print "Received {$_FILES['file']['name']} - its size is {$_FILES['file']['size']}";
    } else {
        // Failed
        print "Upload failed!";
        var_dump($_FILES);
    }
}
?>
I'm trying to read the value of a session ID that is served up to a client page (a PIN that can then be given to other users who want to join the session). According to Chrome developer tools, it's located within this element:
<input type="text" size="18" autocomplete="off" id="idSession" name="idSession" class="lots of stuff here" title="" ">
So far I've been using C# and XPath to navigate around the site successfully for testing purposes, but I just can't get hold of the PIN that is generated within id="idSession", or via any other identifier through XPath. There's a bunch of jQuery going on in the background, but the PIN isn't showing up there either (the code knows about the on-screen locations for the ID in the .js files, but that's it).
I'm new to all of this, so I'd really appreciate a nudge in the right direction, i.e. what tools I need for this, what I'm missing, and what I need to read up on.
Thanks a lot.
What about //input[@id='idSession']/@value to get the content?
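If you're driving the page with Selenium WebDriver from C#, the same value can also be read with GetAttribute. Here's a minimal sketch; the driver setup and URL are placeholders, and only the id comes from the element shown in the question:

// Spin up a driver and navigate to the page that renders the PIN (placeholder URL).
OpenQA.Selenium.IWebDriver driver = new OpenQA.Selenium.Chrome.ChromeDriver();
driver.Navigate().GoToUrl("https://example.com/session");

// Locate the input by its id and read its current value
// (GetAttribute("value") also reflects values filled in by script).
var pinField = driver.FindElement(OpenQA.Selenium.By.Id("idSession"));
string pin = pinField.GetAttribute("value");
System.Console.WriteLine(pin);

If the PIN is filled in asynchronously, you may also need an explicit wait before reading it.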
Also, here's a link to a helper library for creating XPath using LINQ-esque syntax:
var xpath = CreateXpath.Where(e => e.TargetElementName == "input" &&
e.Attribute("id").Text == "idSession").Select(e => e.Attribute("value"));
http://unit-testing.net/CurrentArticle/How-to-Create-Xpath-From-Lambda-Expressions.html
I have a web page that returns a set of results, which you can then ask for in .csv format.
As the creation of the file is quite lengthy (at times up to 30 minutes), I have added some JavaScript that adds a class to a div so that it covers the screen, telling users that the report is being created and to be patient.
After the file has been created and downloaded, I would like the div to return to its original state of not being there (so to speak).
Here is what I currently have.
JavaScript
function skm_LockScreen() {
    var lock = document.getElementById('ctl00_ContentPlaceHolder1_skm_LockPane');
    var lock2 = document.getElementById('ctl00_ContentPlaceHolder1_pleaseWait');
    if (lock)
        lock.className = 'LockOn';
    if (lock2)
        lock2.className = 'WaitingOn';
}
function skm_UnLockScreen() {
    var lock = document.getElementById('ctl00_ContentPlaceHolder1_skm_LockPane');
    var lock2 = document.getElementById('ctl00_ContentPlaceHolder1_pleaseWait');
    if (lock)
        lock.className = 'LockOff';
    if (lock2)
        lock2.className = 'WaitingOff';
}
Button
<asp:Button ID="LatestReportButton" runat="server" CssClass="standardButton" Text="Latest Quote Report" Width="140px"
OnClick="ReportButton_Click" CommandArgument="2" OnClientClick="skm_LockScreen()" />
Code behind
protected void ReportButton_Click(object sender, EventArgs e)
{
    Page.ClientScript.RegisterStartupScript(base.GetType(), "unlock", "<script type=\"text/javascript\">skm_UnLockScreen();</script>");
    try
    {
        // Start creating the file
        using (StreamWriter sw = new StreamWriter(tempDest + "\\" + csvGuid + ".csv", true))
        {
            // Code to create the file goes here
        }
    }
    catch
    {
    }
    Response.AddHeader("Content-disposition", "attachment; filename=report.csv");
    Response.ContentType = "application/octet-stream";
    Response.WriteFile("CreatedReport.csv");
    Response.End();
}
The issue I'm having is that the JavaScript is never written back to the page because of the Response.End().
I've already done it as two buttons, one to create the report and the other to download it, but my company would prefer it all in one.
Any suggestions?
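For context, the two-button version mentioned above looks roughly like the sketch below, added to the same code-behind as the snippet in the question. The handler names and the use of Session are illustrative; tempDest, skm_UnLockScreen and the response headers are reused from the question:

// First button: generate the CSV, remember its path, and unlock the screen.
protected void CreateReportButton_Click(object sender, EventArgs e)
{
    string fullPath = tempDest + "\\" + Guid.NewGuid() + ".csv";
    using (StreamWriter sw = new StreamWriter(fullPath, true))
    {
        // Code to create the file goes here
    }
    Session["ReportPath"] = fullPath; // remembered for the download postback
    Page.ClientScript.RegisterStartupScript(GetType(), "unlock",
        "<script type=\"text/javascript\">skm_UnLockScreen();</script>");
}

// Second button: stream the previously generated file; Response.End() is fine here
// because no script needs to be written back to the page on this postback.
protected void DownloadReportButton_Click(object sender, EventArgs e)
{
    string fullPath = (string)Session["ReportPath"];
    Response.AddHeader("Content-disposition", "attachment; filename=report.csv");
    Response.ContentType = "application/octet-stream";
    Response.WriteFile(fullPath);
    Response.End();
}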
Given the length of time needed to generate the report, the method you're proposing contains a number of inherent risks, each of which will cause the report creation to fail:
the user may accidentally close the browser / tab;
the browser / tab might crash;
timeouts may occur on the web server.
I would tackle this problem in a different way. In the first instance, I would look to see if the report generation can be optimized - "30 minutes" rings alarm bells immediately. Assuming database(s) are involved at some point, I would investigate the following as a minimum:
Are database indexes being used, or used correctly?
Do the report generation queries contain inefficient operations (CURSORs, for example)?
Do the execution plans reveal any other flaws in the report generation process?
If this isn't an option (let's assume you can't modify the DBs for whatever reason), I would still consider a totally different approach to generating the report, whereby the report creation logic is moved outside of the web application into, say, a console app.
For this, you would need to define:
a way to pass the report's parameters to the report generation app;
a way to identify the user that requested the report (ideally login credentials, or Windows identity);
a way to notify the user when the report is ready (email, on-screen message);
a directory on the server where the reports will be saved and downloaded from;
a separate page from which the user can download the report.
A couple of database tables should be sufficient to log this information.
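As a rough illustration of the bookkeeping involved, the log could be as simple as a single table modelled by an entity like the one below. All names here are hypothetical; adapt them to your own schema and data access layer:

using System;

// Hypothetical entity describing one queued report request. The console app would pick up
// Pending rows, generate the report, then mark the row Completed and notify the requester.
public enum ReportStatus { Pending, Running, Completed, Failed }

public class ReportRequest
{
    public int Id { get; set; }                  // primary key
    public string RequestedBy { get; set; }      // login credentials / Windows identity of the requester
    public string Parameters { get; set; }       // serialized report parameters
    public DateTime RequestedAtUtc { get; set; }
    public ReportStatus Status { get; set; }
    public string OutputFileName { get; set; }   // file name within the shared report directory
    public DateTime? CompletedAtUtc { get; set; }
}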
A console app in C# that requests four images in a tight loop sometimes gets back a previous request. The code is below and reproduces against any web site; I typically see 3 or 4 errors per run. I developed this code after reports from people browsing a web site I manage, where occasionally a JPEG or script would be loaded when the user requested an HTML page.
I don't know if it is a Chrome or ChromeDriver issue. If the previous request was an HTML page, then you can end up getting that instead of the image. It seems to be a race condition.
Has anyone else seen this behaviour and can they repeat it with the code below?
class ContentVerify
{
    OpenQA.Selenium.IWebDriver driver;
    readonly System.Collections.Generic.List<string> testUrls = new System.Collections.Generic.List<string>()
    {
        "http://i.imgur.com/zNJvS.jpg",
        "http://i.imgur.com/lzVec.jpg",
        "http://i.imgur.com/rDuhT.jpg",
        "http://i.imgur.com/sZ26q.jpg"
    };

    public void Check()
    {
        driver = new OpenQA.Selenium.Chrome.ChromeDriver(); // Both InternetExplorerDriver and FirefoxDriver work OK.
        for (int i = 0; i < 10; i++)
        {
            TestUrls();
        }
        driver.Quit(); // The driver also crashes on exit, but this seems to be a known bug in Selenium.
    }

    private void TestUrls()
    {
        foreach (var item in testUrls)
        {
            System.Console.WriteLine(item);
            //System.Threading.Thread.Sleep(1); // Uncommenting this makes Chrome & ChromeDriver work as expected.
            driver.Url = item;
            // Requests for images come back as an HTML image tag wrapped in a brief HTML page, like below:
            //<html><body style="margin: 0px;"><img style="-webkit-user-select: none" src="http://i.imgur.com/zNJvS.jpg"></body></html>
            // So the image should always be in the page, but sometimes (not always) we get the previous image requested.
            if (!driver.PageSource.Contains(item))
            {
                System.Console.ForegroundColor = System.ConsoleColor.Red;
                System.Console.WriteLine("Expected: {0}, got: {1}", item, driver.PageSource);
                System.Console.ResetColor();
            }
        }
    }
}
It could be that you're not giving the driver enough time to complete the call and load the page, so it'll "return" whatever page it returned previously. Have you looked into setting up a timeout/wait on the driver?
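For instance, an explicit wait could be dropped into the loop so the check only runs once the requested URL actually shows up in the page source. A minimal sketch, assuming the Selenium support library that provides WebDriverWait is referenced:

// Inside TestUrls(), right after driver.Url = item;
// wait up to 10 seconds for the requested URL to appear in the page source
// (Until throws if the condition is never met within the timeout).
var wait = new OpenQA.Selenium.Support.UI.WebDriverWait(driver, System.TimeSpan.FromSeconds(10));
wait.Until(d => d.PageSource.Contains(item));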
EDIT
Regarding the question of why this issue occurs in Chrome but not the other browsers, I'd have to venture a guess and say it probably has to do with how the different browser engines handle displaying an image directly instead of HTML. I make this assumption because the discrepancy described is not seen when running similar code against an HTML page like the Google home page.
Each browser wraps the image in some HTML. For example, IE9 wraps as such:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=windows-1252" http-equiv=Content-Type></HEAD>
<BODY><IMG src="[url here]"></BODY></HTML>
Whereas Firefox wraps it like:
<html>
<head>
<meta content="width=device-width; height=device-height;" name="viewport">
<link href="resource://gre/res/TopLevelImageDocument.css" rel="stylesheet">
<title>[filename] (JPEG Image, 500 × 332 pixels)</title>
</head>
<body>
<img alt="[url here]" src="[url here]">
</body>
</html>
And finally, Chrome:
<html>
<body style="margin: 0px;">
<img style="-webkit-user-select: none; " src="[url here]" width="500" height="332">
</body>
<style type="text/css"></style>
</html>
Now, I don't know why the Chrome version makes the WebDriver unable to detect the page load. It is certainly the most minimal of the three HTML wrappers, and the W3C validator has a mild panic attack when asked to validate its HTML, while the other two validate relatively well.
Also, as mentioned by mootinator, there have been numerous complaints about the Chrome driver in general, so it could just be an issue with the Chrome WebDriver itself. I just found the above interesting and thought it might be worth sharing.
There seem to be a lot of complaints about performance with the Chrome driver.
http://code.google.com/p/selenium/issues/detail?id=1294
Two facts:
1. Chrome itself is not a poorly performing browser.
2. Requests for new URLs are sent asynchronously.
Regardless of what the actual implementation is, it's apparent that the Chrome driver has a performance problem somewhere in the process of making requests and/or updating itself with the results of requests.
The Selenium driver doesn't guarantee that a page will have finished loading before you take a peek at it. As such, it can't reasonably be called a bug in the driver if you happen to get a race condition in one of your tests. To make Selenium tests reliable you need to rely, as Roddy indicated, on timeouts/waits.
I have been using Selenium for some time now, and it's always the case that the C# code has finished executing before the requested page has even fully loaded, meaning Selenium is very slow in doing its work. So, in order for Selenium to do its stuff, we ended up using Thread.Sleep, and our tests started working correctly.
I agree it's not the nicest way to do it, but we have tried various approaches and failed to find a cleaner solution.
For more information, please see the link "Why is Selenium RC so slow?"; on the right side of this same page there are some related links about other Selenium issues.