I have a private bucket for which I generate pre-signed URLs that expire in 300 seconds (5 minutes). There are 10 images stored in total, but 1 image always gets an expired URL (its expire time never updates). I have tried different browsers and devices on different computers, so it isn't a cache or temp-file problem.
I'm using AWSSDK 1.5.2.2; my code to generate the URLs is the following:
public string GetPreSignedURL(string bucketName, string keyName, System.DateTime expiration)
{
    GetPreSignedUrlRequest urlRequest = new GetPreSignedUrlRequest();
    urlRequest.BucketName = bucketName;
    urlRequest.Key = keyName;
    urlRequest.Expires = expiration;
    urlRequest.Protocol = this.AwsProtocol;
    return this.S3.GetPreSignedURL(urlRequest);
}
And I call it like this:
this.image = this.AWSManager.GetPreSignedURL(bucketName, keyName, this.GetExpireTime());
The GetExpireTime() method is this:
private DateTime GetExpireTime()
{
    int expireTime;
    try
    {
        expireTime = Convert.ToInt32(System.Configuration.ConfigurationManager.AppSettings["expireTime"].ToString());
    }
    catch
    {
        expireTime = defaultExpireTime;
    }
    return DateTime.Now.AddSeconds(expireTime);
}
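Since the failure only shows up in production, a low-risk way to narrow it down is to log the expiry the code actually computes on each call, next to the URL it returns. A minimal diagnostic sketch (the Trace call is an assumption about how this app logs; the call site mirrors the code above):
// Diagnostic sketch, not a fix: log the computed expiry with the generated URL.
// If the logged expiry advances on every call but the returned URL never changes,
// the signing code is fine and something upstream is caching the URL.
DateTime expiry = this.GetExpireTime();
string url = this.AWSManager.GetPreSignedURL(bucketName, keyName, expiry);
System.Diagnostics.Trace.TraceInformation(
    "Pre-signed URL for {0}, expires {1:o}: {2}", keyName, expiry, url);
this.image = url;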
Of the 10 images stored in AWS, there is always one, and always the same image, that returns the same URL every time it is generated (this only happens in production, so I have no way to debug). I connected the development environment to the production bucket and replicated the DB record to see if I could reproduce the problem, but on my local machine the URL is generated just fine.
Here is a URL that was generated at 9:34 AM:
https://gtidev-masorden.s3.amazonaws.com/P4285735_thumbnail.png?AWSAccessKeyId=AKIAJTCJB4KPHVI4HTXQ&Expires=1550177695&Signature=f0Llq2syEvDvgdhxYHeedHCpD8s%3D
And here is one generated at 10:00 AM:
https://gtidev-masorden.s3.amazonaws.com/P4285735_thumbnail.png?AWSAccessKeyId=AKIAJTCJB4KPHVI4HTXQ&Expires=1550177695&Signature=f0Llq2syEvDvgdhxYHeedHCpD8s%3D
I compare every generated URL here:
https://text-compare.com/
And it's always the same URL; no changes in the Expires param or the Signature param. When I re-upload the picture, it starts to generate valid URLs again, but a few minutes later another picture starts to fail.
EDIT: I have been requesting the same image for about 4 days and the expiration date is still the same. When I open the URL in a new tab, this is what it says:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<Expires>2019-02-14T20:54:55Z</Expires>
<ServerTime>2019-02-18T17:09:27Z</ServerTime>
<RequestId>C78E43335EE8E845</RequestId>
<HostId>
PPHXhK6Oj7PEKOqb8io1IcVY6mfqNM5zc89ttLylzH/DKPldIo0v8pukdW4SZmqACAVn8WSyIu0=
</HostId>
</Error>
Perhaps I'm missing something. I don't want to update the AWSSDK because other projects depend on it. Any ideas?
Related
I'm not sure why I'm getting this result. I'm running this on a Linux server. (It's my small web site's shared web hosting account.)
The files are grouped as follows:
and the Another Dir has one file inside:
So I'm trying to retrieve the contents of the badname directory (which doesn't exist on the server) inside the 1somelongdir1234567 directory, using this code:
try
{
    FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create(
        "ftp://server12.some-domain.com/public_html/1somelongdir1234567/badname");
    ftpRequest.EnableSsl = true;
    ftpRequest.Credentials = new NetworkCredential("user", "password");
    ftpRequest.KeepAlive = true;
    ftpRequest.Timeout = -1;
    ftpRequest.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
    using (FtpWebResponse response1 = (FtpWebResponse)ftpRequest.GetResponse())
    {
        //*****BEGIN OF EDIT*****
        Console.WriteLine(response1.StatusDescription);
        Console.WriteLine(response1.StatusCode);
        //*****END OF EDIT*****
        using (StreamReader streamReader = new StreamReader(response1.GetResponseStream()))
        {
            List<string> arrList = new List<string>();
            for (; ; )
            {
                string line = streamReader.ReadLine();
                //I get to here, where `line` is null????
                if (string.IsNullOrEmpty(line))
                    break;
                arrList.Add(line);
                //*****BEGIN OF EDIT*****
                Console.WriteLine(line);
                //*****END OF EDIT*****
            }
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
So as you see, there's no such folder as badname, but instead of throwing an exception, my ftpRequest.GetResponse() succeeds and then streamReader.ReadLine() returns null, as I showed in the code above.
More strangely, if I provide an actual directory as such:
FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create(
    "ftp://server12.some-domain.com/public_html/1somelongdir1234567/Another%20Dir");
the streamReader.ReadLine() still returns null.
Can someone explain why?
Edit: OK, guys, I updated the code above to retrieve the status code. I'm still puzzled though.
First, here are three values of the connection URI and the response/output that I'm getting:
Example 1:
//Existing folder
"ftp://server12.some-domain.com/public_html/1somelongdir1234567"
Output:
150 Accepted data connection
OpeningData
drwxr-xr-x 3 username username 4096 Sep 5 05:51 .
drwxr-x--- 118 username 99 4096 Sep 5 05:54 ..
drwxr-xr-x 2 username username 4096 Sep 5 05:52 Another Dir
-rw-r--r-- 1 username username 11 Sep 5 05:51 test123.txt
Example 2:
//Another existing folder
"ftp://server12.some-domain.com/public_html/1somelongdir1234567/Another%20Dir"
Output:
150 Accepted data connection
OpeningData
Example 3:
//Nonexistent folder
"ftp://server12.some-domain.com/public_html/1somelongdir1234567/SomeBogusName"
Output:
150 Accepted data connection
OpeningData
So why is it giving me the same result for example 2 as for example 3?
As for what FTP server it is, I wasn't able to see it in the network logs or in FtpWebRequest itself. Here's what I was able to get from Microsoft Network Monitor:
Welcome to Pure-FTPd [privsep] [TLS] ----------..
220-You are user number 4 of 50 allowed...
220-Local time is now 13:14. Server port: 21...
220-This is a private system - No anonymous login..
220-IPv6 connections are also welcome on this server...
220 You will be disconnected after 15 minutes of inactivity...
The URL for the ListDirectory/ListDirectoryDetails method should end with a slash, in general.
Without a slash, results tend to be uncertain.
WebRequest.Create("ftp://example.com/public_html/1somelongdir1234567/Another%20Dir/");
As stuartd noted, you get an FTP status code on the response. As far as I know, because of the way FTP commands work, you will never get an exception when a server fails to execute a command you requested; the server will instead just tell you it failed through the FTP response code.
One thing you have to realize is that FTP has no standard for the text it sends back, because it was never designed for its responses to be machine-interpreted. If you have some time, look up the source of FileZilla and check its class for interpreting file listings (directorylistingparser.cpp). There are dozens of ways to interpret a listing, and the application just tries them all until one works, because unless you know what kind of FTP server you're connecting to, that's the only way to do it.
So if that specific server decides to send an empty string back when you request a nonexistent directory, that's just how its FTP implementation does it, and it's the client's problem to figure out what that means.
If you can't tell it's a failure from the FTP response code, you can test whether the server returns something different for an existing but empty folder. If so, you can easily distinguish between that and this empty response, and treat the empty response as a "not found" error in your program.
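A minimal sketch of that check, under an assumption grounded in Example 1 above: this Pure-FTPd server lists at least the "." and ".." entries for any directory that exists, so a completely empty listing can be read as "not found" (other servers may behave differently, so treat this as a heuristic):
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

static class FtpProbe
{
    static List<string> ListDirectory(string url)
    {
        // As noted above, end the URL with a slash for ListDirectoryDetails.
        var request = (FtpWebRequest)WebRequest.Create(url.TrimEnd('/') + "/");
        request.EnableSsl = true;
        request.Credentials = new NetworkCredential("user", "password");
        request.Method = WebRequestMethods.Ftp.ListDirectoryDetails;

        var lines = new List<string>();
        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                lines.Add(line);
        }
        return lines;
    }

    static void Main()
    {
        // Heuristic: this server lists "." and ".." for every directory that
        // exists, so an empty listing is treated as "directory not found".
        var entries = ListDirectory(
            "ftp://server12.some-domain.com/public_html/1somelongdir1234567/badname");
        Console.WriteLine(entries.Count == 0
            ? "Empty listing: treating as 'not found' on this server."
            : entries.Count + " entries found.");
    }
}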
I'm having a very hard time with what I feel should be a simple task. Every week, our team queries VMware vCenter for three pieces of output: VM counts in three different locations. Here is what it looks like:
Name Value
---- -----
locationA 1433
locationB 278
locationC 23
The information is emailed to our team, as well as some of the higher-ups who like to see the data. This is all automated with a PowerShell script and Windows Task Scheduler running on a server; no problems there.
That data is also placed in a Google sheet. We just append a new row with the date and copy and paste the data into the three existing columns. It takes 30 seconds, once a week. Given how little time that takes, automating it seems silly, but I really want to automate that last step using the Google Sheets API.
I seem to keep finding and pursuing what feel like online wild goose chases about scripting access to, and editing of, Google sheets. I've downloaded and installed the Sheets API libraries, the Drive API libraries, and the Google .NET library, set up the Google developer site, and run through the Google Sheets API documentation and OAuth authentication. I'm using Visual Studio 2013 because I figured that would play best with PowerShell and calling the .NET commands.
I have pretty much no coding experience outside of PowerShell (if you can call that coding). I can't even figure out how to pull the Google sheet, much less do anything to it. Nothing I've tried is working so far, and for the little time it takes to copy this info manually every week, I've already spent far more time than it's probably worth. I feel like if I can get a handle on this, it would open the door for further Google automation in the future, since we operate on a Google domain. At any rate, help is very much appreciated.
Here is my latest scripting attempt in Visual Studio:
using System;
using Google.GData.Client;
using Google.GData.Spreadsheets;

namespace MySpreadsheetIntegration
{
    class Program
    {
        // Legacy GData scope for the Spreadsheets API
        private const string SCOPE = "https://spreadsheets.google.com/feeds";
        private const string ApplicationName = "MySpreadsheetIntegration-v1";

        static void Main(string[] args)
        {
            string CLIENT_ID = "abunchofcharacters.apps.googleusercontent.com";
            string CLIENT_SECRET = "secretnumber";
            string REDIRECT_URI = "https://code.google.com/apis/console";

            OAuth2Parameters parameters = new OAuth2Parameters();
            parameters.ClientId = CLIENT_ID;
            parameters.ClientSecret = CLIENT_SECRET;
            parameters.RedirectUri = REDIRECT_URI;
            parameters.Scope = SCOPE;

            string authorizationUrl = OAuthUtil.CreateOAuth2AuthorizationUrl(parameters);
            Console.WriteLine(authorizationUrl);
            Console.WriteLine("Please visit the URL above to authorize your OAuth "
                + "request token. Once that is complete, type in your access code to "
                + "continue...");
            parameters.AccessCode = Console.ReadLine();

            OAuthUtil.GetAccessToken(parameters);
            string accessToken = parameters.AccessToken;
            Console.WriteLine("OAuth Access Token: " + accessToken);

            GOAuth2RequestFactory requestFactory =
                new GOAuth2RequestFactory(null, ApplicationName, parameters);
            SpreadsheetsService service = new SpreadsheetsService(ApplicationName);
            service.RequestFactory = requestFactory;

            // 'auth' and DriveService come from the separate Google Drive client
            // library; 'auth' is never defined in this attempt, which is part of
            // what I can't get working.
            var driveService = new DriveService(auth);
            var file = new File();
            file.Title = "VSI - VM Totals by Service TEST";
            file.Description = string.Format("Created via {0} at {1}", ApplicationName, DateTime.Now.ToString());
            file.MimeType = "application/vnd.google-apps.spreadsheet";
            var request = driveService.Files.Insert(file);
            var result = request.Fetch();

            var spreadsheetLink = "https://docs.google.com/spreadsheets/d/GoogleDoc_ID";
            Console.WriteLine("Created at " + spreadsheetLink);
        }
    }
}
For anyone still following this, I found a solution. I was going about this entirely the wrong way (or at least in a way I couldn't comprehend). One solution to my issue was to create a new Google Apps Script that accesses my email once a week (after we get the report), teases out everything but the data I'm looking for, and sends it to the Google spreadsheet.
Here's the script:
function SendtoSheet() {
    // Grab the most recent message in the newest matching thread
    var threads = GmailApp.search("from:THESENDER in:anywhere subject:THESUBJECTOFTHEEMAILWHICHNEVERCHANGES")[0];
    var message = threads.getMessages().pop();
    var bodytext = message.getBody();
    var counts = [];
    // Each data line looks like "locationA : 1433"; split on <br>, then on ":"
    bodytext = bodytext.split('<br>');
    for (var i = 0; i < bodytext.length; i++) {
        var line = bodytext[i].split(':');
        if (line.length > 0) {
            if (!isNaN(line[1])) {
                counts.push(line[1]);
            }
        }
    }
    // Prepend the date, then append the whole row to the "Data by Week" sheet
    var now = new Date();
    counts.unshift(Utilities.formatDate(now, 'EST', 'MM/dd/yyyy'));
    var sheet = SpreadsheetApp.openById("GoogleDocID");
    sheet = sheet.setActiveSheet(sheet.getSheetByName("Data by Week"));
    sheet.appendRow(counts);
}
The counts array contains the magic that extracts the numeric data by splitting on line breaks and colons. Works perfectly. It didn't involve figuring out how to use Visual Studio, or the .NET Google libraries, or editing the running PowerShell script. Clean and easy.
Hope this helps someone.
I have added the ClientBuildManager.PrecompileApplication to my Azure web role as described here. The web role will start; however, it takes an extraordinarily long time (15-20 minutes), presumably because of the number of files it has to compile (18K+).
After the role starts, I hit the site, which doesn't appear to be any faster in initial startup.
When I RDP to my web role, I can see 2 separate folders in the Temporary ASP.NET Files, one containing all of my pre-compiled code (3K+ files), the other containing a smaller set of files matching those that would be used during my initial request (50 files).
From what I can tell, the site is pre-compiling; however, the actual requests are not leveraging this pre-compilation and are doing normal on-the-fly compilation, as before, when a request is made.
After viewing the above, I made another request to a different page within my site and confirmed the folder with 50 files increased to 58 files, telling me it is in fact still doing on-the-fly compiling. The other folder with 3K files remained unchanged.
Here is the code I am using for my pre-compilation in my OnStart method:
var siteName = RoleEnvironment.CurrentRoleInstance.Id + "_Web";
var mainSite = serverManager.Sites[siteName];
var rootVirtualPath = String.Format("/LM/W3SVC/{0}/ROOT/", mainSite.Id);
var clientBuildManager = new ClientBuildManager(rootVirtualPath, null);
clientBuildManager.PrecompileApplication();
Am I missing something else that would force the role to use my pre-compiled files?
Here's the full code that works. Note one important difference: do NOT include a trailing slash in appVirtualDir.
using (var serverManager = new ServerManager())
{
string siteName = RoleEnvironment.CurrentRoleInstance.Id + "_" + "Web";
var siteId = serverManager.Sites[siteName].Id;
var appVirtualDir = $"/LM/W3SVC/{siteId}/ROOT"; // Do not end this with a trailing /
var clientBuildManager = new ClientBuildManager(appVirtualDir, null, null,
new ClientBuildManagerParameter
{
PrecompilationFlags = PrecompilationFlags.Default,
});
clientBuildManager.PrecompileApplication();
}
I have a WinForms/OCX app that consumes a QlikView document. We have gotten a patch from QV so that RefreshDocument works in the OCX the way RefreshDocument does in the QV application. But the QV application shows a nice enabled button when the document has been reloaded on the server.
Does anyone know what needs to be done to detect that, either in C#, in macro code, or via the Management API?
This is the document-refresh code:
private void button2_Click(object sender, EventArgs e)
{
    var myBloodybookmarkHack = "dynaBookmark" + Guid.NewGuid().ToString().Replace("-", "");
    axQlikOCX1.ActiveDocument.CreateUserBookmark(myBloodybookmarkHack, true);
    //axQlikOCX1.OpenDocument(@"qvp://qvSeverName/path/MyDocument.qvw?bookmark=Server\dynaBookmarkb5aa82ae467540fdb0d18bb499044ed9");
    axQlikOCX1.RefreshDocument();
    axQlikOCX1.ActiveDocument.RecallUserBookmark(myBloodybookmarkHack);
    axQlikOCX1.ActiveDocument.RemoveUserBookmark(myBloodybookmarkHack);
}
By suppressing the paint event I get this to run pretty OK. The next patch will also make it keep the selections (to be fixed in 11.2 service release 6).
You need to detect whether CreateUserBookmark was successful and not restore the bookmark if the creation failed.
This code works in QV 11.2 service release 5.
The filesystem records a new modified time when the .qvw file is rewritten after a load, assuming the data portion of this application is not broken out from the QVW file. You could likely come very close to accomplishing this by checking for new timestamps. Alternatively, if logging is enabled in the .qvw document, you could read the log file* that QlikView generates to accomplish the same thing.
*The log file writes are sometimes delayed, so your file might be refreshed a little bit before the log states that it is.
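If the .qvw sits on a share the client can read, the timestamp idea reduces to polling the file's last-write time. A minimal sketch (the path and poll interval are illustrative assumptions):
using System;
using System.IO;
using System.Threading;

// Poll the document's last-write time; the server rewrites the .qvw after a
// reload, so a newer timestamp suggests fresh data is available.
string qvwPath = @"\\qvServerName\path\MyDocument.qvw"; // illustrative path
DateTime lastSeen = File.GetLastWriteTimeUtc(qvwPath);

while (true)
{
    Thread.Sleep(TimeSpan.FromSeconds(30));
    DateTime current = File.GetLastWriteTimeUtc(qvwPath);
    if (current > lastSeen)
    {
        lastSeen = current;
        Console.WriteLine("Document rewritten at {0:o}; enable the refresh button here.", current);
    }
}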
We ended up using the QV Management API to get the last task reload time.
Download the QV Management API demo from QV.
This code shows how to get the tasks for a document; through that, you get when the last document reload task finished.
private DateTime GetLastDocumentRun(string documentName)
{
    string QMS = "http://MyQlikviewserver:4799/QMS/Service";
    var client = new QMSClient("BasicHttpBinding_IQMS", QMS);
    string key = client.GetTimeLimitedServiceKey();
    ServiceKeyClientMessageInspector.ServiceKey = key;
    var taskStatusFilter = new TaskStatusFilter();
    var clientTaskStatuses = client.GetTaskStatuses(taskStatusFilter, TaskStatusScope.All);
    foreach (var taskStatus in clientTaskStatuses)
    {
        Trace.WriteLine(taskStatus.General.TaskName);
        if (taskStatus.General.TaskName.ToLower().Contains(documentName.ToLower()))
        {
            string fin = taskStatus.Extended.FinishedTime + "";
            DateTime finishedTime;
            if (DateTime.TryParse(fin, out finishedTime))
                return finishedTime;
            Logger.ErrMessage("QvManagementApi.GetLastDocumentRun",
                new Exception("Task finished time did not return a valid datetime value: " + fin));
            return DateTime.MinValue;
        }
    }
    return DateTime.MinValue;
}
This is slow, so you should run it on a different thread.
Also, this does not show whether the task reloaded successfully. We haven't fixed that yet, but taskStatus.Extended has the last log, which you can parse to find out whether the reload succeeded.
If I understand correctly, you want to know if a document has finished reloading on a QlikView server, right?
If your OCX application has a constant connection, you could evaluate the ReloadTime() function in the document, which tells you when the document was last reloaded. If you poll that function while issuing a DocumentRefresh, you will see the timestamp change once the newly reloaded document becomes available on the server.
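A sketch of that idea against the OCX (hedged: Evaluate is the standard QlikView automation call for evaluating a document expression, but the exact OCX surface and return format may vary by version):
// Compare the document's ReloadTime() before and after asking for a refresh;
// a changed value means the newly reloaded copy is available on the server.
string before = axQlikOCX1.ActiveDocument.Evaluate("ReloadTime()").ToString();
axQlikOCX1.RefreshDocument();
string after = axQlikOCX1.ActiveDocument.Evaluate("ReloadTime()").ToString();
bool reloadedOnServer = before != after;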
The code you're posting does not reload a QlikView document. At least not in QlikView lingo; it just opens the document on the server.
Please elaborate if I've misunderstood you.
Regards Torber
Is there a way, in either JavaScript or C#, to tell if the browser that someone is using has disabled caching of static content?
I need to be able to test whether or not the browser is optimized for caching.
UPDATE
I did a bit more investigation of the problem; you can find a more detailed answer in my recent post.
Note, the solution described below (initially) is not a cross-browser solution.
Not sure if it helps, but you can try the following trick:
1. Add some resource to your page, say a JavaScript file cachedetect.js.
2. The server should generate cachedetect.js each time someone requests it, and the response should contain cache-related headers, i.e. if the browser's cache is enabled, the resource will be cached for a long time. Each cachedetect.js should look like this:
var version = [incrementally generated number here];
var cacheEnabled; // will contain the result of our check
var cloneCallback; // function which will compare versions from two javascript files

function isCacheEnabled() {
    if (!window.cloneCallback) {
        var currentVersion = version; // cache the current version of the file
        // request the same cachedetect.js by adding a <script> tag dynamically to <head>
        var head = document.getElementsByTagName("head")[0];
        var script = document.createElement('script');
        script.type = 'text/javascript';
        script.src = "cachedetect.js";
        // the newly loaded cachedetect.js will execute the same isCacheEnabled function,
        // so we need to prevent it from loading the script a third time by checking for cloneCallback's existence
        cloneCallback = function() {
            // once the file is loaded, version will differ from currentVersion when the cache is disabled
            window.cacheEnabled = currentVersion == window.version;
        };
        head.appendChild(script);
    } else {
        window.cloneCallback();
    }
}

isCacheEnabled();
After that you can simply check for cacheEnabled === true or cacheEnabled === false after some period of time.
I believe this should work: http://jsfiddle.net/pseudosavant/U2hdy/
Basically you have to preload a file twice and check how long each load took. The second load should take less than 10 ms (in my own testing). You will want to make sure the file you are testing is sufficiently large that it takes a bit to download; it doesn't have to be huge, though.
var preloadFile = function (url) {
    var start = +new Date();
    var file = document.createElement("img");
    file.src = url;
    return +new Date() - start;
};

var testFile = "http://upload.wikimedia.org/wikipedia/en/thumb/d/d2/Mozilla_logo.svg/2000px-Mozilla_logo.svg.png";
var timing = [];
timing.push(preloadFile(testFile));
timing.push(preloadFile(testFile));

var caching = (timing[1] < 10); // timing[1] should be less than 10ms if caching is enabled
Another approach involves both the client and the server:
1. Make a call to a page/endpoint that returns a random unique id in the response, and set cache headers for this page/endpoint.
2. Make the same call again, which would generate a different unique number on the server.
3. If the two numbers match, the second response came from the cache; if they differ, caching is disabled.
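A minimal server-side sketch of that endpoint (ASP.NET MVC is assumed here; the controller name and cache lifetime are illustrative):
using System;
using System.Web;
using System.Web.Mvc;

public class CacheProbeController : Controller
{
    // Returns a fresh GUID but marks the response as cacheable. A client that
    // calls this twice and sees the same GUID got the second copy from cache;
    // two different GUIDs mean caching is disabled or bypassed.
    public ContentResult Probe()
    {
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
        return Content(Guid.NewGuid().ToString(), "text/plain");
    }
}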