I am sure this must have been answered before but I cannot find a solution, so I figure I am likely misunderstanding other people's solutions or trying to do something daft, but here we go.
I am writing an add-in for Outlook 2010 in C# where a user can click a button in the ribbon and submit the email contents to a web site. When they click the button the website should open in the default browser, thus allowing them to review what has just been submitted and interact with it on the website. I am able to do this using query strings in the URL using:
System.Diagnostics.Process.Start("http://www.test.com?something=value");
but the limit on the amount of data that can be submitted and the messy URLs are stopping me from following through with this approach. I would like to use an HTTP POST for this, as it is obviously more suitable; however, the methods I have found for doing it do not open the page in the browser after submitting the POST data:
http://msdn.microsoft.com/en-us/library/debx8sh9.aspx
To summarise: the user needs to be able to click the button in the Outlook ribbon, have the web browser open, and see the contents of the email that were submitted via POST.
EDIT:
Right, I found a way to do it. It's pretty ugly, but it works! Simply create a temporary .html file (that is then launched as above) containing a form with hidden fields for all the data, and have it submitted on page load with JavaScript.
I don't really like this solution as it relies on JavaScript (I have a <noscript> submit button just in case) and seems like a bit of a bodge, so I am still really hoping someone on here will come up with something better.
This is eight years late, but here's some code that illustrates the process pretty well:
string tempHTMLLocation = "some_arbitrary_location" + "/temp.html";
string url = "https://your_desired_url.com";
// create the temporary html file
using (FileStream fs = new FileStream(tempHTMLLocation, FileMode.Create)) {
using (StreamWriter w = new StreamWriter(fs, Encoding.UTF8)) {
w.WriteLine("<body onload=\"goToLink()\">");
w.WriteLine("<form id=\"form\" method=\"POST\" action=\"" + url + "\">");
w.WriteLine("<input type=\"hidden\" name=\"post1\" value=\"" + post_data1 + "\">");
w.WriteLine("<input type=\"hidden\" name=\"post2\" value=\"" + post_data2 + "\">");
w.WriteLine("</form>");
w.WriteLine("<script> function goToLink() { document.getElementById(\"form\").submit(); } </script>");
w.WriteLine("</body>");
}
}
// launch the temp html file
var launchProcess = new ProcessStartInfo {
FileName = tempHTMLLocation,
UseShellExecute = true
};
Process.Start(launchProcess);
// delete temp file but add delay so that Process has time to open file
Task.Delay(1500).ContinueWith(t=> File.Delete(tempHTMLLocation));
Upon opening the page, the onload handler immediately submits the form, which POSTs the data to the URL and opens it in the default browser.
The Dropbox client does it the same way as you mentioned in your EDIT. But it also does some obfuscation, i.e. it XORs the data with the hash submitted via the URL.
Here are the steps Dropbox takes:
1. in-app: Create a token that can be used to authorize at dropbox.com.
2. in-app: Convert the token to a hex string (A).
3. in-app: Create a secure random hex string (B) of the same length.
4. in-app: Calculate C = A XOR B.
5. in-app: Create a temporary HTML file with the following functionality:
   1. A hidden input field which contains value B.
   2. A submit form with the hidden input fields necessary for login to dropbox.com.
   3. A JS function that reads the hash from the URI, XORs it with B and writes the result to the submit form's hidden fields.
   4. Delete the hash from the URI.
   5. Submit the form.
6. in-app: Open the temporary HTML file with the standard browser and add C as a hash to the end of the URI.
Now when your browser opens the HTML file, it calculates the auth token from the hidden input field and the hash in the URI and opens dropbox.com. And because of point 5.4 you cannot hit the back button in your browser to log in again, because the hash is gone.
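The XOR step above can be sketched in C# roughly like this (the class and method names, and the per-character hex representation, are illustrative assumptions, not Dropbox's actual code):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

static class TokenObfuscation
{
    // XOR two equal-length hex strings: C = A XOR B.
    // XOR-ing C with B again recovers A client side.
    public static string XorHex(string a, string b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Hex strings must have the same length.");
        var result = new char[a.Length];
        for (int i = 0; i < a.Length; i++)
        {
            int x = Convert.ToInt32(a[i].ToString(), 16) ^ Convert.ToInt32(b[i].ToString(), 16);
            result[i] = x.ToString("x")[0];
        }
        return new string(result);
    }

    // Create a secure random hex string of the given length (the one-time pad B).
    public static string RandomHex(int length)
    {
        var bytes = new byte[(length + 1) / 2];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(bytes);
        return string.Concat(bytes.Select(x => x.ToString("x2"))).Substring(0, length);
    }
}
```

The app would then write B into the temporary HTML file and append C as the URI hash; the page's script recomputes A = C XOR B, so neither the file on disk nor the URI alone contains the token.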
I'm not sure I would have constructed the solution that way. Instead, I would post all the data to a web service (using HttpWebRequest, as @Loci described, or just importing the service using Visual Studio), which would store the data in a database (perhaps with a pending status). Then direct the user (using your Process.Start approach) to a page that would display the pending help ticket, which would allow them to either approve or discard it.
It sounds like a bit more work, but it should clean up the architecture of what you are trying to do. Plus you have the added benefit of not worrying about how to trigger a form post from the client side.
Edit:
A plain ASMX web service should at least get you started. You can right-click on your project and select Add Service Reference to generate the proxy code for calling the service.
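As a rough sketch of what such a service could look like (the class, method, and field names here are made up for illustration):

```csharp
using System.Web.Services;

[WebService(Namespace = "http://example.com/")]
public class TicketService : WebService
{
    // Stores the submitted email as a pending help ticket and returns
    // its id, so the add-in can build the review URL to launch, e.g.
    // http://www.test.com/review.aspx?id=42
    [WebMethod]
    public int SubmitTicket(string subject, string body)
    {
        return SaveAsPending(subject, body);
    }

    private int SaveAsPending(string subject, string body)
    {
        // Database insert with a "pending" status omitted for brevity.
        return 0;
    }
}
```

The add-in would call SubmitTicket through the generated proxy, then open the review page with a short, clean URL via Process.Start.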
Related
I'm trying to log in to a site with a username and password from C# code.
I found out that it uses AJAX to authenticate...
How should I implement such a login?
The elements in the web page don't seem to have an "id"...
I tried to implement it using HtmlAgilityPack, but I don't think this is the right direction...
I can't simulate a button click since I can't find an "id" for the button.
if (tableNode.Attributes["class"].Value == "loginTable")
{
    var userInputNode =
        tableNode.SelectSingleNode("//input[@data-logon-popup-form-user-name-input='true']");
    var passwordInputNode =
        tableNode.SelectSingleNode("//input[@data-logon-popup-form-password-input='true']");
    userInputNode.SetAttributeValue("value", "myemail@gmail.com");
    passwordInputNode.SetAttributeValue("value", "mypassword");
    var loginButton = tableNode.SelectSingleNode("//div[@data-logon-popup-form-submit-btn='true']");
}
This question is quite broad but I'll help you in the general direction:
Use Chrome DevTools (F12) => Network tab => check "Preserve log". An alternative could be Fiddler2.
Log in manually and look at the request the AJAX code sends. Save the endpoint (the URL) and the body of the request (the JSON data containing the username and password).
Do the POST directly in your C# code and forget about HtmlAgilityPack, unless you actually need to get some dynamic data from the page, but that's rarely the case.
Log in with something like this code snippet: POSTing JSON to URL via WebClient in C#
Now you're logged in. You usually receive some data from the server when logging in, so save it and use it for whatever you want to do next. I'm guessing it might contain a SessionId or an authentication token that your future requests will need as a parameter to prove that you're actually logged in.
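For instance, replaying the captured login request from C# might look roughly like this (the endpoint URL and the JSON field names are placeholders; use the ones you saw in the Network tab):

```csharp
using System;
using System.Net;

class LoginExample
{
    static void Main()
    {
        // Placeholder endpoint and body; copy the real ones from the
        // request you captured in DevTools.
        var url = "https://example.com/api/login";
        var json = "{\"username\":\"user\",\"password\":\"pass\"}";

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            string response = client.UploadString(url, "POST", json);

            // The response typically carries a session id or auth token;
            // keep it (plus any Set-Cookie headers) for later requests.
            Console.WriteLine(response);
        }
    }
}
```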
So I have a single form on a page. There are several text input fields and such. Right now there is also a jQuery file upload control wherein you can select several files. The problem I have right now is that I'm requiring that the user upload the files first (using the jQuery control) and then I save those files in Session state until the regular form posts the rest of the form fields. Then I pull the previously uploaded files from Session and do what I need to do.
So basically to fill out the form requires two separate POST operations back to the server. The files, then the remaining form fields.
I'm thinking there must be a better way to let a user select his/her files yet not post anything until the user submits the main form to post all the other fields. I've read several posts on this site, but I can't find one that addresses this particular issue.
Any suggestions/assistance is greatly appreciated.
I believe you can do this using Uploadify. There are two options you'd want to look at. First, set auto to false to prevent selected files from immediately being loaded. Second, use the formdata option to send along your other form fields along with the payload.
You'd then call the upload method when the user submits the form, uploading each file in the queue and sending the form data all at once.
Server Side Part:
You'll probably be submitting the form to an ASPX file or an ASHX handler. I prefer using an ASHX handler since they're more lightweight. Both give you access to the HttpContext or the HttpRequest object. First, you'll need to check context.Request.Files.Count to make sure files were posted:
if (context.Request.Files.Count > 0) // We have files uploaded
{
var file = context.Request.Files[0]; // First file, but there could be others
// You can call file.SaveAs() to save the file, or file.InputStream to access a stream
}
Obtaining the other form fields should be just as easy:
var formfield = context.Request["MyFormField"]; // Some form field
You can also write results back to the client, such as a JSON encoded description of any resulting errors:
context.Response.Write(response); // Anything you write here gets passed in to onUploadSuccess
I think that should get you started anyway!
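Putting the pieces above together, the handler could look roughly like this (the upload folder, field name, and JSON response are illustrative):

```csharp
using System.Web;

public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Save each posted file; the target folder is just an example.
        for (int i = 0; i < context.Request.Files.Count; i++)
        {
            var file = context.Request.Files[i];
            var path = context.Server.MapPath("~/Uploads/" + file.FileName);
            file.SaveAs(path);
        }

        // Regular form fields arrive alongside the files.
        var formfield = context.Request["MyFormField"];

        // Whatever is written here is passed to onUploadSuccess.
        context.Response.ContentType = "application/json";
        context.Response.Write("{\"success\":true}");
    }

    public bool IsReusable { get { return false; } }
}
```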
Hello Friends...
I have an MVC project and a form with an attachment box (such as the Yahoo Mail compose page),
for example "create_request.cshtml"
I want:
each user fills in the fields and uploads his/her files (I post each file by AJAX as soon as the user selects it), and after the form is submitted, if the page has errors (checked server side), the user sees the uploaded files in the response page (the response form with highlighted errors)...
I implemented the above scenario nicely:
(AJAX + TempData + saving the files server side before submit + thumbnails of the uploaded files before submitting the form)
in my controller:
public void KeepTempData(string name, string value)
{
TempData[name] = value;
}
In my view I send each file name to the server after I upload it with another AJAX call:
ajaxPostData(KeepTempData, "Attachments", $('#Attachments').val());
But I have a problem:
Because I used TempData to keep the list of uploaded filenames, if the user attaches a file on the current page, then opens a new tab in his/her browser and goes to the "create_request" page address, he/she sees an empty form with an attached file...
My Solution:
Maybe I can solve this problem with a unique key for each page (each form), keeping it in a session variable and in a hidden field for each page request, using an anti-forgery token with a salt or DateTime.Now.
I found this post on the web. Its problem is like my problem, and its solution is like my solution.
What is an appropriate solution to this problem in MVC?
What is your recommendation for using TempData (or Session) without conflicts when the user has requests for the same page open in several tabs of a browser (like Firefox)?
TempData has a very short lifespan. You should use Session instead of TempData.
In fact, the session object is the backing storage behind TempData, but data stored in TempData is only available to the current request and the subsequent request.
I solved this problem by:
Get a key from the querystring.
If the key is empty:
I generate a new, unique key in the controller action.
Redirect the page to a new URL with the key (as a querystring key-value).
Else:
Add the key to the ViewBag (to set a hidden field in the view).
Save the key in a session variable.
public virtual ActionResult Create(string attkey)
{
    if (string.IsNullOrEmpty(attkey))
    {
        attkey = generatNewNameForSession("key"); // for example: key_jhtyujbvkjadsgfvn
        Response.Redirect("myControle/Create?attkey=" + attkey, true);
    }
    ViewBag.AttachmentsKey = attkey;
    if (Session[attkey] == null)
        KeepData(attkey, "");
    // ...
}
Now each instance of my page has an identifier, and I can decide whether or not to show the previously attached files.
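The key generation itself can be as simple as a GUID; a minimal sketch (the "key_" prefix just mirrors the example above, and the class name is made up):

```csharp
using System;

public static class AttachmentKeys
{
    // One unique key per rendered form instance; each browser tab
    // therefore gets its own Session slot and they cannot collide.
    public static string NewKey()
    {
        return "key_" + Guid.NewGuid().ToString("N");
    }
}
```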
The aim is to retrieve data from a website after it has finished its AJAX calls.
Currently the data is retrieved when the page first loads, but the required data is inside a div which is loaded after an AJAX call.
To summarize, the scenario is as follows:
A webpage is called with some parameters passed from C# code (currently using CsQuery for C#). When the request is sent, the page opens, a "Loading" picture shows, and after a few seconds the required data is retrieved. The CsQuery code, however, retrieves the initial page contents with the "Loading" picture.
The code is as follows:
UrlBuilder ub = new UrlBuilder("<url>")
.AddQuery("departure", "KHI")
.AddQuery("arrival", "DXB")
.AddQuery("queryDate", "2013-03-28")
.AddQuery("queryType", "D");
CQ dom = CQ.CreateFromUrl(ub.ToString());
CQ availableFlights = dom.Select("div#availFlightsDiv");
string RenderedDiv = availableFlights["#availFlightsDiv"].RenderSelection();
When you "scrape" a site, you are making a call to the web server and you get what it serves up. If the DOM of the target site is modified by JavaScript (AJAX or otherwise), you will never get that content unless you load the page into some kind of browser engine, on the machine doing the scraping, that is capable of executing the JavaScript.
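One way to get such a browser engine in C# is the WinForms WebBrowser control. A rough sketch (the URL and div id are taken from the question; the polling interval is an arbitrary choice, since DocumentCompleted fires before late AJAX updates, so the code polls for the content):

```csharp
using System;
using System.Windows.Forms;

class AjaxScraper
{
    [STAThread]
    static void Main()
    {
        var browser = new WebBrowser { ScriptErrorsSuppressed = true };
        browser.DocumentCompleted += (sender, e) =>
        {
            // The initial document is ready here, but the AJAX content
            // may still be loading; poll for the target div.
            var timer = new Timer { Interval = 1000 };
            timer.Tick += (ts, te) =>
            {
                var div = browser.Document.GetElementById("availFlightsDiv");
                if (div != null && div.InnerHtml != null && !div.InnerHtml.Contains("Loading"))
                {
                    timer.Stop();
                    Console.WriteLine(div.InnerHtml); // the rendered flight list
                    Application.ExitThread();
                }
            };
            timer.Start();
        };
        browser.Navigate("http://example.com/?departure=KHI&arrival=DXB&queryDate=2013-03-28&queryType=D");
        Application.Run(); // message loop so the control can run the page's scripts
    }
}
```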
Almost a year old question, you might have got your answer already. But would like mention this awesome project here - SimpleBrowser.
https://github.com/axefrog/SimpleBrowser
It keeps your DOM updated.
I have a .NET MVC web application. On my page there is a form to choose which PDF docs to display. I want to open the PDF files in a new window or tab. The user can choose to display one or two PDF files. My form posts the data to the controller, but I don't know how to return two PDFs from my controller and display them in separate windows/tabs.
Does anyone have an idea how this can be done?
You can let the model write the URLs of the documents into a JavaScript code block:
@if (Model.ShowPDFs)
{
    <script>
        function ShowPDF()
        {
            window.open('@Model.PdfUrl1');
            @if (Model.Open2Pdf)
            {
                @:window.open('@Model.PdfUrl2');
            }
        }
        // opens the documents 3 seconds after the page has loaded
        setTimeout(ShowPDF, 3000);
    </script>
}
I made something similar (but I build the PDF server side using ReportViewer) in this way:
my form posts data to the controller action (with AJAX);
the controller action reads the posted data, queries the database accordingly, and decides how many PDFs it has to return;
the controller action saves the data to pass to ReportViewer in the session, with a different key for every PDF (determined by my logic);
the controller action returns (to the callback of the AJAX call) an array with all the keys used to store data in the session;
client side, the JS callback loops over the returned array and, for every item, calls (opening a link in a different tab) a different controller (whose only responsibility is to send the PDF in the response), passing it the key for that PDF in the query string;
the PrintController reads the data from the session (using the key received), builds the report and sends it in the response.
I think you could do something similar. I don't understand how your PDFs are built (are they data-dependent, or pre-existing on the server?), but you can save the PDF stream, or the PDF path, in the session instead of the data like I do.
Hope this helps; if you think my solution can work for you and you need some code, I can try to extract some from my codebase (in my case there are other issues and I would have to rewrite the code if you need it...).
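A minimal sketch of the print controller described in the steps above (the action name, the assumption that the session holds raw PDF bytes, and the filename are all placeholders):

```csharp
using System.Web.Mvc;

public class PrintController : Controller
{
    // Called in a new tab with a key returned by the AJAX post,
    // e.g. /Print/Pdf?key=abc123
    public ActionResult Pdf(string key)
    {
        var pdfBytes = Session[key] as byte[];
        if (pdfBytes == null)
            return HttpNotFound();

        // "inline" asks the browser to display the PDF in the tab
        // instead of downloading it.
        Response.AppendHeader("Content-Disposition", "inline; filename=report.pdf");
        return File(pdfBytes, "application/pdf");
    }
}
```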