I am using the Telerik Kendo File Upload control for uploading folders.
In the production environment, a few users are reporting an issue with folder upload: during the upload, some files error out, and the browser developer tools console logs an "ERR_HTTP2_PROTOCOL_ERROR" error (as attached) for the failed files.
When I try it myself I don't get this error and all folders upload properly. I asked a user to share the files that were failing, and when I tried them they uploaded successfully. When the user retried the same files today, the ones that were failing yesterday succeeded, but there are still files that give the same error.
I went through a post which says the problem could be due to the use of HTTP/2, and that switching to HTTP/1.1 made it work. We are also using HTTP/2, but we don't have the option of going back to HTTP/1.1. Link below:
https://www.telerik.com/forums/problems-with-multi-file-upload-and-http-2
Any suggestions?
This is because HTTP/2 is not enabled on your clients' machines, which is why the error appears.
If you look at your local machine you will see that, under your server, you have the HTTPS protocol enabled and a valid certificate.
Your clients either lack a valid certificate on the server or are using the site over plain HTTP.
You can learn more here:
HTTP/2 explanation
SETTINGS_MAX_CONCURRENT_STREAMS (0x3):
Indicates the maximum number of concurrent streams that the sender will allow. This limit is directional: it applies to the number of streams that the sender permits the receiver to create. Initially, there is no limit to this value. It is recommended that this value be no smaller than 100, so as to not unnecessarily limit parallelism.
A value of 0 for SETTINGS_MAX_CONCURRENT_STREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the creation of new streams; however, this can also happen for any limit that is exhausted with active streams. Servers SHOULD only set a zero value for short durations; if a server does not wish to accept requests, closing the connection is more appropriate.
Resolution: add an "Http2MaxConcurrentClientStreams" value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters in the registry, set it to 100 or higher, and restart the server.
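As an illustration only (this code is not from the original answer), the value could also be created programmatically with the registry API; the key path and value name are taken from the resolution above, and 100 is the minimum the spec recommends. It must run elevated, and the server needs a restart afterwards.

using Microsoft.Win32;

class Fix
{
    static void Main()
    {
        // Create/overwrite the DWORD described in the resolution above;
        // requires administrative rights, takes effect after a restart.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters",
            "Http2MaxConcurrentClientStreams",
            100,                        // 100 or higher, per the recommendation above
            RegistryValueKind.DWord);
    }
}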
I'm migrating some code away from Active Directory, rewriting all directory requests to reference classes in System.DirectoryServices.Protocols and to be LDAP v3 compliant. This is supposed to be a low-level LDAP v3 namespace, so I assumed it wouldn't be polluted with AD-specific types. The following code is from a monitoring background worker that was already using the System.DirectoryServices.Protocols namespace. It opens an async, long-running request to AD and listens for changes using the DirSyncRequestControl control.
SearchRequest request = new SearchRequest(
    mDNSearchRoot,
    mLdapFilter,
    SearchScope.Subtree,
    mAttrsToWatch
);
request.Controls.Add(
    new DirSyncRequestControl(
        mCookie,
        mDirSyncOptions
    )
);
mConn.BeginSendRequest(
    request,
    mRequestTimeout,
    PartialResultProcessing.NoPartialResultSupport,
    endPollDirectory,
    null
);
It sends a cookie as a byte[] that tells the directory where to start querying from, which is handy in case the background worker crashes and needs a restart later. In the endPollDirectory callback an updated cookie is received and persisted immediately to the filesystem, so that if a restart is needed we always know the point from which we last received results. That cookie is loaded on restart and passed back with the DirSyncRequestControl.
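A condensed, hypothetical sketch of that callback (member names follow the snippet above; the persistence path is a placeholder):

using System.DirectoryServices.Protocols;
using System.IO;

private void endPollDirectory(IAsyncResult ar)
{
    var response = (SearchResponse)mConn.EndSendRequest(ar);

    foreach (DirectoryControl control in response.Controls)
    {
        DirSyncResponseControl dirSync = control as DirSyncResponseControl;
        if (dirSync != null)
        {
            // Persist the updated cookie so a restarted monitor can resume from here.
            mCookie = dirSync.Cookie;
            File.WriteAllBytes(@"C:\state\dirsync.cookie", mCookie);   // placeholder path
        }
    }
    // ... process response.Entries, then issue the next BeginSendRequest with the new cookie ...
}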
The issue I'm facing is that DirSyncRequestControl operates against an OID that is specifically an Active Directory extension, not standard LDAP. Our corporate directory is IBM-based LDAP and can't have AD OIDs and controls applied to it. Standard LDAP supports "Persistent Search" (2.16.840.1.113730.3.4.3), but .NET doesn't provide a control that could be added as in the code above. There's also no way to pass arguments like a cookie. The idea with the Persistent Search control is that you open the connection and, as time passes, the LDAP server sends changes back which I can respond to. But on initiating the connection there's no way to specify when to return results from; only results since the request was started will be received. If the monitor were to die and a directory change happened before the monitor could restart, those changes could never be handled.
Does anyone know if there's an existing control, compliant with standard LDAP, that could be added to the request and operates the way the AD-specific DirSyncRequestControl does, where a start date/time could be passed?
Standard would be the 1.3.6.1.4.1.4203.1.9.1.1 "Sync Request" control from RFC 4533, which is the basis of "Syncrepl" directory replication in OpenLDAP and 389-ds.
(Though "standard" does not guarantee that IBM's LDAP server will support it – or that it's enabled on your server specifically, similar to how OpenLDAP requires loading the "syncprov" overlay first.)
2.2. Sync Request Control
The Sync Request Control is an LDAP Control [RFC4511] where the
controlType is the object identifier 1.3.6.1.4.1.4203.1.9.1.1 and the
controlValue, an OCTET STRING, contains a BER-encoded
syncRequestValue. The criticality field is either TRUE or FALSE.
syncRequestValue ::= SEQUENCE {
    mode ENUMERATED {
        -- 0 unused
        refreshOnly       (1),
        -- 2 reserved
        refreshAndPersist (3)
    },
    cookie     syncCookie OPTIONAL,
    reloadHint BOOLEAN DEFAULT FALSE
}
The Sync Request Control is only applicable to the SearchRequest
Message.
Although .NET doesn't support this control natively (it seems to focus on supporting only Active Directory extensions), it should be possible to create a custom class similar to the DirSyncRequestControl class with the correct OID and correct BER serialization (and somehow handle the "Sync Done" control that delivers the final sync cookie to you, etc.).
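A minimal sketch of such a class might look like the following. It follows the syncRequestValue layout quoted above and uses BerConverter for the BER encoding; the class name and the decision to always emit reloadHint are my own, and decoding the Sync State / Sync Done response controls is not shown.

using System.DirectoryServices.Protocols;

// Illustrative sketch, not a built-in .NET class: an RFC 4533 Sync Request control.
public class SyncRequestControl : DirectoryControl
{
    public const int RefreshOnly = 1;
    public const int RefreshAndPersist = 3;

    private readonly int mMode;
    private readonly byte[] mCookie;
    private readonly bool mReloadHint;

    public SyncRequestControl(int mode, byte[] cookie, bool reloadHint, bool isCritical)
        : base("1.3.6.1.4.1.4203.1.9.1.1", null, isCritical, true)
    {
        mMode = mode;
        mCookie = cookie;
        mReloadHint = reloadHint;
    }

    public override byte[] GetValue()
    {
        // syncRequestValue ::= SEQUENCE { mode ENUMERATED, cookie OCTET STRING OPTIONAL,
        //                                 reloadHint BOOLEAN DEFAULT FALSE }
        // Omit the cookie element when no cookie has been persisted yet.
        return mCookie == null
            ? BerConverter.Encode("{eb}", mMode, mReloadHint)
            : BerConverter.Encode("{eob}", mMode, mCookie, mReloadHint);
    }
}

This control would be added to the SearchRequest in place of DirSyncRequestControl; the updated cookie comes back via the server's sync response controls, which would need similar custom decoding.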
OpenLDAP's ldapsearch supports calling this control via ldapsearch -E sync=rp[/cookie]. On the server side, slapd supports this control for databases that have the "syncprov" overlay loaded (which is required for replication).
389-ds (Red Hat Directory Server) supports this control if the plug-in is enabled.
The other approach is to have a persistent search for (modifyTimestamp>=...) and keep track of the last received entry change timestamp in place of the "cookie". This isn't very accurate, unfortunately.
I have an application which I deploy to Azure, and suddenly I am getting an error that didn't occur on my local machine when I tested the application.
Failed to load resource: the server responded with a status of 404 (Not Found)
When I tested the application on my local machine everything worked perfectly without any errors, but when I test the application live on the server many options don't work, and in the browser console I get this kind of error.
Any help? What could be the problem here?
404 is a "resource not found" error. The most probable reason is that your files are in the "../" folder and you are trying to access a file in the "../.." folder.
I would suggest using root-relative URLs like /Folder/subfolder instead of relative URLs like "../parentfolder/subfolder".
Also, it's good to use "~".
More on paths here:
https://msdn.microsoft.com/en-us/library/ms178116.aspx
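For instance, a minimal sketch in WebForms code-behind (the control name and paths are placeholders, not from the original answer):

// ResolveUrl turns an app-root-relative "~/" path into a correct absolute path,
// e.g. "/MyApp/Content/site.css", no matter how deeply the current page is nested.
string cssUrl = Page.ResolveUrl("~/Content/site.css");
imgLogo.ImageUrl = "~/Images/logo.png";   // server controls also accept "~/" paths directly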
I am assuming it has something to do with my app pool/IIS, but why is this error only being thrown in production?
To start: I have read every related question about this topic on SO.
I cannot reproduce this error in my dev or test environments. I have sent hundreds of POST requests using Postman (dev), and I have also used my iOS application pointing to my test environment to send thousands of requests, and I am still in the same boat.
About my project: I am using Web API, and the client application runs on iOS 10 and is written in Swift. I only receive this error occasionally and cannot reproduce it on demand. I am wondering if anyone else has run into this issue and, if so, what steps you took to take care of the problem.
Error #1
The source was not found, but some or all event logs could not be searched.
To create the source, you need permission to read all event logs to make
sure that the new source name is unique. Inaccessible logs: Security.
Note: this error is not causing me to lose any data when I receive it; all records are still being committed to my database. So it is not a huge issue, since I only get the error about 1 in 100 times a new record is submitted. But curiosity is killing me and I wanted to see if anyone had any suggestions.
I am working on an ASP.NET (WebForms, ASP.NET 2.0, Framework 3.5) application. It is a 32-bit application running on IIS 7.0 on Windows Server 2008 R2 SP1.
I am facing an issue with large file uploads (files of more than 20 MB or so). The application is able to upload large files; however, after some number of uploads, the next set of uploads keeps failing until IIS is restarted.
The application supports concurrent file uploads. A single large file upload always works; only when we start uploading more than one file does one of the uploads get stuck.
I tried looking at the temp folders where the posted file data gets written and noticed that when the issue happens, the upload of the failing file never starts from the server's point of view: no temp file is ever generated, and after a few seconds the request fails.
When things fail:
CPU usage is fine
w3wp sits at 2 GB memory usage (out of 4 GB total RAM)
w3wp does not crash, as the other pages of the application still work fine
I tried using Wireshark to look at the network traffic, but it also only shows ERR_CONNECTION_RESET. Apart from that, I am not getting any clue.
I suspect the points below but am not sure how to confirm or fix them.
1) To handle concurrent uploads, the server needs to keep up with the data rate coming from the client side, and when it is unable to match that, it must be failing internally. This could be due to the server's inability to serve concurrent requests.
2) Frequent large uploads increase the memory footprint of the application to the point where it cannot handle concurrent uploads, because even when dumping the files to a temporary location in chunks, RAM is still required.
Here is my web.config setting:
<httpRuntime maxRequestLength="2097151" executionTimeout="10800" enableVersionHeader="false"/>
From the implementation perspective:
1) We have a client-side implementation written in JavaScript, which creates FormData and sends an XHR to the server.
2) The server has a method which gets called once the complete file has been copied to the server's temp directory; we extract the file data using the Request.Files collection and then process it further.
When the issue happens, the server method gets called, but Request.Files comes back empty.
Please let me know if anyone has good insight on this which could guide me to the root cause and a fix.
UPDATE:
Client-side code representation:
// Set HTTP headers
_http.setRequestHeader("x-uploadmethod", "formdata");
_http.setRequestHeader("x-filename", fileName);      // name of the file being uploaded
// Prepare form data
var data = new FormData();
data.append(fileName, fileBlob);                      // fileBlob is the File object to upload
// Send the XHR request
_http.send(data);
Server-side code representation:
HttpFileCollection files = Request.Files;
int Id = objUpload.UploadMyAssets(files[0]);
The logic in UploadMyAssets takes files[0] as an HttpPostedFile and then moves ahead with application-specific logic.
Thanks
I had the same issue. It turns out the ASP.NET default session state manager blocks with async streams over HTTPS (HTTP/2). It didn't happen over HTTP (non-SSL).
I resolved this by using SessionStateBehavior.ReadOnly for the controller class. Related to this post:
ASP.Net Asynchronous HTTP File Upload Handler
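For an MVC-style controller the change looks roughly like this (a sketch; the controller name and action are placeholders):

using System.Web.Mvc;
using System.Web.SessionState;

// Marking session state as read-only stops the session lock from
// serializing long-running async upload requests for the same session.
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    [HttpPost]
    public ActionResult Upload()
    {
        // ... handle the uploaded files from Request.Files ...
        return new HttpStatusCodeResult(200);
    }
}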
I have an application that deploys game data files to different gaming consoles. If matching files on the user's machine and the console have identical sizes and dates, they must not be redeployed.
On Xbox, this is easily accomplished because an XDK library used to upload files to the console allows me to set the date on the uploaded files to match the dates on the user's machine.
On PS3, however, I use an FTP service running on the console and WebClient.UploadFileAsync to upload files to it. I cannot figure out how to set the uploaded file's date timestamp, leaving me with only the file size to determine identical files, which is unsafe.
I was wondering if there was a way to set a file's date timestamp through the WebClient interface?
I don't think you can use the WebClient interface for this.
There seem to be various non-standard FTP extension commands implemented by some FTP servers to support the setting of a file's last modified time. The ones I know about are:
MDTM - This is the standard command for getting a file's last modification time (as used by GetDateTimestamp()). Some servers also support a set operation by specifying a timestamp argument to the command as well as a filename.
MFMT - This was defined in an IETF experimental draft MFMT, to standardise this operation and avoid the non-standard use of the MDTM command described above.
SITE UTIME
If the FTP server running on the PS3 supports any of these extensions (check the result of the FEAT command), then you could use a simple socket FTP connection to issue the appropriate command to the server, after uploading the file.
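As a rough sketch (not tested against a PS3; the host, credentials, and the assumption that MFMT is supported are placeholders, and multi-line replies and error handling are ignored), issuing the command over a plain socket could look like this:

using System;
using System.IO;
using System.Net.Sockets;

static void SetRemoteTimestamp(string host, string user, string pass,
                               string remotePath, DateTime lastWriteUtc)
{
    using (var client = new TcpClient(host, 21))
    using (var stream = client.GetStream())
    using (var reader = new StreamReader(stream))
    using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
    {
        Console.WriteLine(reader.ReadLine());   // 220 greeting
        writer.WriteLine("USER " + user);
        Console.WriteLine(reader.ReadLine());   // 331 need password
        writer.WriteLine("PASS " + pass);
        Console.WriteLine(reader.ReadLine());   // 230 logged in

        // MFMT takes the time as YYYYMMDDHHMMSS in UTC, followed by the path.
        writer.WriteLine("MFMT " + lastWriteUtc.ToString("yyyyMMddHHmmss") + " " + remotePath);
        Console.WriteLine(reader.ReadLine());   // 213 on success

        writer.WriteLine("QUIT");
        Console.WriteLine(reader.ReadLine());
    }
}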
WebClient will hand off FTP connections to FtpWebRequest. If you use FtpWebRequest directly you can send FTP commands to the server. The commands that are supported are defined as fields of WebRequestMethods.Ftp. One of those commands is GetDateTimestamp.
So if you construct an FtpWebRequest manually (instead of through WebClient) and send either the GetDateTimestamp or the ListDirectoryDetails command, you should be able to get the timestamp of the target file.
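For example (the URI and credentials are placeholders), reading a remote file's timestamp looks roughly like this:

using System;
using System.Net;

var request = (FtpWebRequest)WebRequest.Create("ftp://ps3-host/path/to/file.bin");
request.Method = WebRequestMethods.Ftp.GetDateTimestamp;
request.Credentials = new NetworkCredential("user", "pass");

using (var response = (FtpWebResponse)request.GetResponse())
{
    // LastModified carries the parsed timestamp from the MDTM reply.
    Console.WriteLine(response.LastModified);
}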