I'm trying to stream my own online radio on my site. Right now I'm focusing on simply using an HTTP handler and HTML5 audio to do so.
How can I keep the IIS handler constantly running, without ever ending, for each user, like a real online radio?
You can redirect directly to any URL inside the handler, provided that URL serves audio content, since you intend to point the <audio> src at the handler. So you can try this:
public void ProcessRequest(HttpContext context)
{
    //context.Response.Redirect("http://api.soundcloud.com/tracks/148976759/stream?client_id=201b55a1a16e7c0a122d112590b32e4a");
    // You can use the redirect above, or serve a local file as below.
    string fileName = context.Server.MapPath(@"\mp3\") + context.Request["file_name"];
    if (File.Exists(fileName))
    {
        using (FileStream stream = new FileStream(fileName, FileMode.Open))
        {
            context.Response.ContentType = "audio/mpeg"; // standard MIME type for MP3
            stream.CopyTo(context.Response.OutputStream);
        }
    }
}
<audio src="Handler1.ashx" controls></audio>
Regarding the handler "constantly running": when you host this in IIS, any request to the handler is processed on demand (i.e., you have an HTML page whose audio src points at the handler; when an end user hits that page, the handler is invoked). It's the same concept as an .aspx page: both are hosted in IIS, and each incoming request is processed.
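If you want the handler to keep sending audio for as long as the listener stays connected (closer to a real radio stream), one approach is to write the file in small chunks and flush after each one. This is only a rough sketch, assuming you simply loop a single MP3 file; a real station would mix in a live source:

public void ProcessRequest(HttpContext context)
{
    context.Response.ContentType = "audio/mpeg";
    context.Response.BufferOutput = false; // push bytes out as they are written

    string fileName = context.Server.MapPath(@"\mp3\") + context.Request["file_name"];
    byte[] buffer = new byte[8192];

    // Keep looping the same file for as long as the listener stays connected.
    while (context.Response.IsClientConnected)
    {
        using (FileStream stream = File.OpenRead(fileName))
        {
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0
                   && context.Response.IsClientConnected)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();
            }
        }
    }
}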
I recently developed a .NET web app that downloaded zip files from a set location on our network. I did this by retrieving the content stream and then passing it back to the view by returning File().
Code from the .NET web app whose behavior I want to emulate:
public async Task<ActionResult> Download()
{
    try
    {
        HttpContent content = plan.response.Content;
        var contentStream = await content.ReadAsStreamAsync(); // get the actual content stream
        if (plan.contentType.StartsWith("image") || plan.contentType.Contains("pdf"))
            return File(contentStream, plan.contentType);
        return File(contentStream, plan.contentType, plan.PlanFileName);
    }
    catch (Exception e)
    {
        return Json(new { success = false });
    }
}
plan.response is constructed in a separate method and then stored in a Session variable so that it is specific to the user; it is then accessed here for download.
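Roughly, that flow looks like this (PlanModel, client, and planUrl are hypothetical placeholder names for the real types and values):

// In the method that builds the plan (names here are placeholders):
plan.response = await client.GetAsync(planUrl); // authenticated request for the file
Session["plan"] = plan;                         // per-user storage

// Later, in Download():
var plan = (PlanModel)Session["plan"];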
I am now working on a Windows Forms Application that needs to be able to access and download these files from the same location. I am able to retrieve the response content, but I do not know how to proceed in order to download the zip within a Windows Forms Application.
Is there a way, starting from the received content stream, to download this file using a similar method within a Windows Forms app? That would be convenient, since accessing the files initially requires logging in and authenticating the user, so they cannot be accessed with just a file path.
Well, depending on what you're trying to accomplish, here's a pretty simplistic example of downloading a file from a URL and saving it locally:
string href = "https://www.learningcontainer.com/wp-content/uploads/2020/05/sample-zip-file.zip";
WebRequest request = WebRequest.Create(href);
using (WebResponse response = request.GetResponse())
{
    using (Stream dataStream = response.GetResponseStream())
    {
        Uri uri = new Uri(href);
        string fileName = Path.Combine(Path.GetTempPath(), Path.GetFileName(uri.LocalPath));
        // FileMode.Create truncates any existing file; OpenOrCreate could leave stale bytes behind.
        using (FileStream fs = new FileStream(fileName, FileMode.Create))
        {
            dataStream.CopyTo(fs);
        }
    }
}
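Since you mentioned the files sit behind a login, you may also need to attach credentials or a previously captured cookie to the request before calling GetResponse(). A sketch (the cookie name and value below are placeholders):

var request = (HttpWebRequest)WebRequest.Create(href);

// Option 1: pass the current Windows identity (for NTLM/Kerberos-protected sites)
request.Credentials = CredentialCache.DefaultCredentials;

// Option 2: reuse an auth cookie captured from an earlier login request
// (".ASPXAUTH" and "token-value" are placeholders)
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie(".ASPXAUTH", "token-value", "/", new Uri(href).Host));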
I created an Ashx handler in C# that serves me up images based on a fileid parameter that gets passed on to me. I also have a simple tooltip preview script that I wrote, which is not working. You can see the image loading, but then after it loads, the image just vanishes.
I suspect the issue is in the ASHX handler because if I use a static image, it works just fine. Here is my ASHX handler code:
public void ProcessRequest(HttpContext context)
{
    string fileId = HttpUtility.UrlDecode(context.Request.QueryString["fileId"] ?? "") ?? "";
    string fullFileName = context.Server.MapPath("~/Uploads") + "\\" + fileId;
    using (FileStream s = File.Open(fullFileName, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        context.Response.ContentType = HelperClasses.Utility.GetMimeTypeFromMagic(fullFileName);
        var buffer = new byte[s.Length];
        s.Read(buffer, 0, (int) s.Length);
        context.Response.BinaryWrite(buffer);
        context.Response.Write(buffer);
        s.Close();
    }
    context.Response.Flush();
    context.Response.Close();
}
In addition, I've created a fiddle to demonstrate the issue.
In your code, the line
context.Response.Close();
is the issue. The Close method abruptly terminates the response stream (see details here, and also check this related question: IIS & Chrome: failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING).
Replace that line with context.Response.End(); to end the response normally.
You are throwing garbage at the end of the response, specifically because you are calling Response.Write in addition to BinaryWrite. If you look at the response of your handler, this is at the end (literally):
System.Byte[]
Obviously that isn't part of the image. This line should be removed:
context.Response.Write(buffer);
I would also avoid doing anything like Response.End and Response.Close. Let the ASP.NET runtime take care of that.
Better yet, if you are using .NET Framework 4 or newer (where Stream.CopyTo was introduced), you can simplify the whole thing to this:
s.CopyTo(context.Response.OutputStream);
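Putting both fixes together, the whole handler body could look something like this (using your existing GetMimeTypeFromMagic helper):

public void ProcessRequest(HttpContext context)
{
    string fileId = HttpUtility.UrlDecode(context.Request.QueryString["fileId"] ?? "") ?? "";
    string fullFileName = Path.Combine(context.Server.MapPath("~/Uploads"), fileId);

    using (FileStream s = File.Open(fullFileName, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        context.Response.ContentType = HelperClasses.Utility.GetMimeTypeFromMagic(fullFileName);
        // Stream straight to the output; no extra Write, End, or Close calls.
        s.CopyTo(context.Response.OutputStream);
    }
}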
I am writing a web scraper to download content from a website.
Navigating to the website/URL triggers the creation of a temporary URL. This new URL points to a zipped text file, which has to be downloaded and parsed.
I have written a scraper in C# using WebClient and its DownloadFileAsync() method. The zipped file is read from the designated location in a trapped DownloadFileCompleted event.
My issue is that the Windows Open/Save dialog is triggered. This requires user input, so the automation is disrupted.
Can you suggest a way to bypass the issue? I am fine with rewriting the code using any alternate libraries. :)
Thanks for reading.
You can use 'HttpWebRequest' to perform the request and save the streamed bytes to disk.
var request = WebRequest.Create(@"your url here");
request.Method = WebRequestMethods.Http.Get;
var response = request.GetResponse();
using (var writeStream = new FileStream(@"path", FileMode.Create))
{
    using (var readStream = response.GetResponseStream())
    {
        var buffer = new byte[1024];
        int readCount;
        while ((readCount = readStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Write only the bytes actually read; the last chunk is usually smaller than the buffer.
            writeStream.Write(buffer, 0, readCount);
        }
    }
}
I'm trying to obtain an image to encode to a WordML document. The original version of this function used files, but I needed to change it to get images created on the fly with an aspx page. I've adapted the code to use HttpWebRequest instead of a WebClient. The problem is that I don't think the page request is getting resolved and so the image stream is invalid, generating the error "parameter is not valid" when I invoke Image.FromStream.
public string RenderCitationTableImage(string citation_table_id)
{
    string image_content = "";
    string _strBaseURL = String.Format("http://{0}",
        HttpContext.Current.Request.Url.GetComponents(UriComponents.HostAndPort, UriFormat.Unescaped));
    string _strPageURL = String.Format("{0}{1}", _strBaseURL,
        ResolveUrl("~/Publication/render_citation_chart.aspx"));
    string _staticURL = String.Format("{0}{1}", _strBaseURL,
        ResolveUrl("~/Images/table.gif"));
    string _fullURL = String.Format("{0}?publication_id={1}&citation_table_layout_id={2}",
        _strPageURL, publication_id, citation_table_id);
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(_fullURL);
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream image_stream = response.GetResponseStream();
        // Read the image data
        MemoryStream ms = new MemoryStream();
        int num_read;
        byte[] crlf = System.Text.Encoding.Default.GetBytes("\r\n"); // note: unused
        byte[] buffer = new byte[1024];
        for (num_read = image_stream.Read(buffer, 0, 1024); num_read > 0; num_read = image_stream.Read(buffer, 0, 1024))
        {
            ms.Write(buffer, 0, num_read);
        }
        // Base64-encode the image data
        byte[] image_bytes = ms.ToArray();
        string encodedImage = Convert.ToBase64String(image_bytes);
        ms.Position = 0;
        System.Drawing.Image image_original = System.Drawing.Image.FromStream(ms); // <-- error here: parameter is not valid
        image_stream.Close();
        image_content = string.Format("<w:p>{4}<w:r><w:pict><w:binData w:name=\"wordml://{0}\">{1}</w:binData>" +
            "<v:shape style=\"width:{2}px;height:{3}px\">" +
            "<v:imagedata src=\"wordml://{0}\"/>" +
            "</v:shape>" +
            "</w:pict></w:r></w:p>", _word_image_id, encodedImage, 800, 400, alignment.center);
        image_content = "<w:br w:type=\"text-wrapping\"/>" + image_content + "<w:br w:type=\"text-wrapping\"/>";
    }
    catch (Exception ex)
    {
        return ex.ToString();
    }
    return image_content;
}
Using a static URI it works fine. If I replace "staticURL" with "fullURL" in the WebRequest.Create method I get the error. Any ideas as to why the page request doesn't fully resolve?
And yes, the full URL resolves fine and shows an image if I post it in the address bar.
UPDATE:
Just read your updated question. Since you're running into login issues, try doing this before you execute the request:
request.Credentials = CredentialCache.DefaultCredentials;
If this doesn't work, then perhaps the problem is that authentication is not being enforced on static files, but is being enforced on dynamic files. In this case, you'll need to log in first (using your client code) and retain the login cookie (using HttpWebRequest.CookieContainer on the login request as well as on the second request) or turn off authentication on the page you're trying to access.
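A rough sketch of that cookie flow (the login URL and form fields are placeholders for whatever your application actually uses):

var cookies = new CookieContainer();

// 1. Log in once and let the server store its auth cookie in the container.
var loginRequest = (HttpWebRequest)WebRequest.Create("http://yourserver/login.aspx"); // placeholder URL
loginRequest.Method = "POST";
loginRequest.ContentType = "application/x-www-form-urlencoded";
loginRequest.CookieContainer = cookies;
byte[] body = Encoding.UTF8.GetBytes("username=me&password=secret"); // placeholder fields
loginRequest.ContentLength = body.Length;
using (Stream rs = loginRequest.GetRequestStream())
{
    rs.Write(body, 0, body.Length);
}
loginRequest.GetResponse().Close();

// 2. Make the image request with the same container so the cookie is sent along.
var imageRequest = (HttpWebRequest)WebRequest.Create(_fullURL);
imageRequest.CookieContainer = cookies;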
ORIGINAL:
Since it works with one HTTP URL and doesn't work with another, the place to start diagnosing this is figuring out what's different between the two requests, at the HTTP level, which accounts for the difference in behavior in your code.
To figure out the difference, I'd use Fiddler (http://fiddlertool.com) to compare the two requests. Compare the HTTP headers. Are they the same? In particular, are they the same HTTP content type? If not, that's likely the source of your problem.
If headers are the same, make sure both the static and dynamic image are exactly the same content and file type on the server. (e.g. use File...Save As to save the image in a browser to your disk). Then use Fiddler's Hex View to compare the image content. Can you see any obvious differences?
Finally, I'm sure you've already checked this, but just making sure: /Publication/render_citation_chart.aspx refers to an actual image file, not an HTML wrapper around an IMG element, right? This would account for the behavior you're seeing, where a browser renders the image OK but your code doesn't.
Here's the purpose of my console program: Make a web request > Save results from web request > Use QueryString to get next page from web request > Save those results > Use QueryString to get next page from web request, etc.
So here's some pseudocode for how I set the code up.
for (int i = 0; i < 3; i++)
{
    string strPageNo = Convert.ToString(i);
    // creates the url I want, with incrementing pages
    string strURL = "http://www.website.com/results.aspx?page=" + strPageNo;
    // makes the web request
    WebRequest wrGETURL = WebRequest.Create(strURL);
    // gets the web page for me
    Stream objStream = wrGETURL.GetResponse().GetResponseStream();
    // for reading web page
    StreamReader objReader = new StreamReader(objStream);
    //--------
    // -snip- code that saves it to file, etc.
    //--------
    objReader.Close();
    objStream.Close();
    // so the server doesn't get hammered
    System.Threading.Thread.Sleep(1000);
}
Pretty simple, right? The problem is, even though it increments the page number to get a different web page, I'm getting the exact same results page each time the loop runs.
i IS incrementing correctly, and I can cut/paste the url strURL creates into a web browser and it works just fine.
I can manually type in &page=1, &page=2, &page=3, and it'll return the correct pages. Somehow putting the increment in there screws it up.
Does it have anything to do with sessions, or what? I make sure I close both the stream and the reader before it loops again...
Have you tried creating a new WebRequest object on each pass through the loop? It could be that the Create() method isn't adequately flushing out all of its old data.
Another thing to check is that the response stream is fully read and closed before the next loop iteration.
This code works fine for me:
var urls = new[] { "http://www.google.com", "http://www.yahoo.com", "http://www.live.com" };
foreach (var url in urls)
{
    WebRequest request = WebRequest.Create(url);
    using (Stream responseStream = request.GetResponse().GetResponseStream())
    using (Stream outputStream = new FileStream("file" + DateTime.Now.Ticks.ToString(), FileMode.Create, FileAccess.Write, FileShare.None))
    {
        const int chunkSize = 1024;
        byte[] buffer = new byte[chunkSize];
        int bytesRead;
        while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            byte[] actual = new byte[bytesRead];
            Buffer.BlockCopy(buffer, 0, actual, 0, bytesRead);
            outputStream.Write(actual, 0, actual.Length);
        }
    }
    Thread.Sleep(1000);
}
Just a suggestion: try disposing the Stream and the Reader. I've seen some weird cases where not disposing objects like these and reusing them in loops can yield some wacky results.
That URL doesn't quite make sense to me unless you are using MVC or something that can interpret the querystring correctly.
http://www.website.com/results.aspx&page=
should be:
http://www.website.com/results.aspx?page=
Some browsers will accept poorly formed URLs and render them fine. Others may not which may be the problem with your console app.
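One way to sidestep hand-built query strings entirely is UriBuilder, which puts the ? in the right place for you:

// UriBuilder guarantees a well-formed query string.
var builder = new UriBuilder("http://www.website.com/results.aspx");
builder.Query = "page=" + i; // yields http://www.website.com/results.aspx?page=2 for i == 2
WebRequest request = WebRequest.Create(builder.Uri);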
Here's my terrible, hackish workaround solution:
Make another console app that calls THIS one, with the outer app passing the page number as an argument that the scraper appends to strURL. It works, but I feel so dirty.
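For what it's worth, the wrapper can be as small as this ("Scraper.exe" is a placeholder for the first console app's executable name):

// Launch the scraper once per page, passing the page number on the command line.
for (int i = 0; i < 3; i++)
{
    using (var process = System.Diagnostics.Process.Start("Scraper.exe", i.ToString()))
    {
        process.WaitForExit();
    }
}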