I have added ClientBuildManager.PrecompileApplication to my Azure web role as described here. The web role starts, however it takes an extraordinarily long time (15-20 minutes), presumably because of the number of files it has to compile (18K+).
After the role starts, I hit the site, which doesn't appear to be any faster in initial startup.
When I RDP to my web role, I can see 2 separate folders in Temporary ASP.NET Files: one containing all of my pre-compiled code (3K+ files), the other containing a smaller set of files matching those that would be used during my initial request (50 files).
From what I can tell, the site is pre-compiling; however, the actual requests are not leveraging this pre-compilation and are doing normal on-the-fly compilation, just as before.
After viewing the above, I made another request to a different page within my site and confirmed that the folder with 50 files grew to 58 files, telling me it is in fact still compiling on the fly. The other folder with 3K files remained unchanged.
Here is the code I am using for my pre-compilation in my OnStart method:
using (var serverManager = new ServerManager())
{
    var siteName = RoleEnvironment.CurrentRoleInstance.Id + "_Web";
    var mainSite = serverManager.Sites[siteName];
    var rootVirtualPath = String.Format("/LM/W3SVC/{0}/ROOT/", mainSite.Id);

    var clientBuildManager = new ClientBuildManager(rootVirtualPath, null);
    clientBuildManager.PrecompileApplication();
}
Am I missing something else that would force the role to use my pre-compiled files?
Here's the full code that works. Note one important difference: do NOT include a trailing slash in appVirtualDir.
using (var serverManager = new ServerManager())
{
string siteName = RoleEnvironment.CurrentRoleInstance.Id + "_" + "Web";
var siteId = serverManager.Sites[siteName].Id;
var appVirtualDir = $"/LM/W3SVC/{siteId}/ROOT"; // Do not end this with a trailing /
var clientBuildManager = new ClientBuildManager(appVirtualDir, null, null,
new ClientBuildManagerParameter
{
PrecompilationFlags = PrecompilationFlags.Default,
});
clientBuildManager.PrecompileApplication();
}
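For reference, here is a minimal sketch of where that snippet sits in the role entry point; the using directives are the ones the code above needs, and the rest is standard web role boilerplate rather than anything specific to this answer:

using Microsoft.Web.Administration;            // ServerManager
using Microsoft.WindowsAzure.ServiceRuntime;   // RoleEntryPoint, RoleEnvironment
using System.Web.Compilation;                  // ClientBuildManager, PrecompilationFlags

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        using (var serverManager = new ServerManager())
        {
            string siteName = RoleEnvironment.CurrentRoleInstance.Id + "_Web";
            var siteId = serverManager.Sites[siteName].Id;
            var appVirtualDir = $"/LM/W3SVC/{siteId}/ROOT"; // no trailing slash

            var clientBuildManager = new ClientBuildManager(appVirtualDir, null, null,
                new ClientBuildManagerParameter
                {
                    PrecompilationFlags = PrecompilationFlags.Default
                });
            clientBuildManager.PrecompileApplication();
        }

        return base.OnStart();
    }
}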
When attempting to build tests with Selenium WebDriver 4.10 and C# (.NET Framework 4.5.2) using the following code, I receive a pop-up asking me to select a profile. From what I understand, the code below is already specifying the profile to use:
string strEdgeProfilePath = @"C:\\Users\\" + Environment.UserName + @"\\AppData\\Local\\Microsoft\\Edge\\User Data";
string strDefaultProfilePath = strEdgeProfilePath + @"\\Default";
string strSeleniumProfilePath = strEdgeProfilePath + @"\\Selenium"; // <---- I have deleted this folder and copied the contents of the "Default" folder into it, to no avail.
using (var service = EdgeDriverService.CreateDefaultService(@"C:\Temp\Selenium\Edge")) {
    service.UseVerboseLogging = true;
    EdgeOptions edgeOptions = new EdgeOptions();
    edgeOptions.AddArgument("--user-data-dir=" + strSeleniumProfilePath);
    edgeOptions.AddArgument("--profile-directory=Selenium");
    driver = new EdgeDriver(@"C:\Temp\Selenium\Edge", edgeOptions); // <----- msedgedriver.exe is located here
}
Even when I select the profile Microsoft Edge does not continue and the test times out.
What can I do to prevent the profile selection and get the test to run?
There are some problems in your code:
You don't need to use double backslashes inside a verbatim (@) string; a single backslash is enough.
The value for --user-data-dir is not right. Using strEdgeProfilePath for it is enough.
So the right code should be like this:
string strEdgeProfilePath = @"C:\Users\" + Environment.UserName + @"\AppData\Local\Microsoft\Edge\User Data";
...
// Here you set the path of the profile, ending with "User Data", not the profile folder
edgeOptions.AddArgument("--user-data-dir=" + strEdgeProfilePath);
// Here you specify the actual profile folder
// If it is the Default profile, there is no need for this line of code
edgeOptions.AddArgument("--profile-directory=Selenium");
Besides, to avoid errors from already-running Edge processes, you need to follow the backup steps in this answer. Please use the backup folder instead of the original folder in the code.
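Putting those points together, a minimal sketch of the corrected setup could look like the following; the paths and the "Selenium" profile name come from the question, and the driver folder location is carried over as an assumption:

using System;
using OpenQA.Selenium.Edge;

class EdgeProfileTest
{
    static void Main()
    {
        // The --user-data-dir path ends at "User Data"; the profile folder is passed separately.
        string userDataDir = @"C:\Users\" + Environment.UserName +
                             @"\AppData\Local\Microsoft\Edge\User Data";

        var edgeOptions = new EdgeOptions();
        edgeOptions.AddArgument("--user-data-dir=" + userDataDir);
        // Name of the profile folder inside "User Data"; omit this line for the Default profile.
        edgeOptions.AddArgument("--profile-directory=Selenium");

        // Folder containing msedgedriver.exe (location assumed from the question).
        using (var driver = new EdgeDriver(@"C:\Temp\Selenium\Edge", edgeOptions))
        {
            driver.Navigate().GoToUrl("https://example.com");
            Console.WriteLine(driver.Title);
        }
    }
}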
I'm fairly certain I'm either doing something wrong or understanding something wrong. It's hard to give a piece of code that shows my problem, so I'm going to try explaining my scenario and the outcome.
I'm starting up several instances of a DLL in the same console application, each in its own app domain. I then generate a Guid.NewGuid() that I assign to a class in the instance, and set the application's folder to a new folder. This works great so far; I can see everything works and my instances are separated. However, when I started naming my app's folder after the unique GUID generated for that class, I started picking up anomalies.
It works fine when I instantiate the new instances slowly, but when I hammer new ones in, the application starts picking up data in its folder when it starts up. After some investigation, I found that it's because that folder already exists, due to that GUID already having been used. On further investigation, I can see that the machine takes a bit of a pause and then continues to generate the new instances, all with the same GUID.
I understand that the GUID-generating algorithm uses the MAC address as part of it, but I was under the impression that even if the same machine generates two GUIDs at the same exact moment, they would still be unique.
Am I correct in that statement? Where am I wrong?
Code :
Guid guid = Guid.NewGuid();
string myFolder = Path.Combine(baseFolder, guid.ToString());
AppDomain ad = AppDomain.CurrentDomain;
Console.WriteLine($"{ad.Id} - {guid.ToString()}");
string newHiveDll = Path.Combine(myFolder, "HiveDriveLibrary.dll");
if (!Directory.Exists(myFolder))
{
Directory.CreateDirectory(myFolder);
}
if (!File.Exists(newHiveDll))
{
File.Copy(hiveDll, newHiveDll);
}
Directory.SetCurrentDirectory(myFolder);
var client = ServiceHelper.CreateServiceClient(serviceURL);
ElementConfig config = new ElementConfig();
ElementConfig fromFile = ElementConfigManager.GetElementConfig();
if (fromFile == null)
{
config.ElementGUID = guid;
config.LocalServiceURL = serviceURL;
config.RegisterURL = registerServiceURL;
}
else
{
config = fromFile;
}
Directory.SetCurrentDirectory is a thin wrapper atop the Kernel32 function SetCurrentDirectory.
Unfortunately, the .NET documentation writers didn't choose to copy the warning from the native function:
Multithreaded applications and shared library code should not use the SetCurrentDirectory function and should avoid using relative path names. The current directory state written by the SetCurrentDirectory function is stored as a global variable in each process, therefore multithreaded applications cannot reliably use this value without possible data corruption from other threads that may also be reading or setting this value
It's your reliance on this function that's creating the appearance that multiple threads have magically selected exactly the same GUID value.
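As a sketch of the alternative under that constraint, you can keep each instance's folder as an explicit absolute path and never touch the process-wide current directory; the names baseFolder, hiveDll and ElementConfigManager are carried over from the question, and the config-file path parameter is only a hypothetical illustration since the real signature isn't shown:

using System;
using System.IO;

static class InstanceSetup
{
    public static void SetUpInstance(string baseFolder, string hiveDll)
    {
        Guid guid = Guid.NewGuid();
        string myFolder = Path.Combine(baseFolder, guid.ToString());
        Directory.CreateDirectory(myFolder); // no-op if the folder already exists

        string newHiveDll = Path.Combine(myFolder, "HiveDriveLibrary.dll");
        if (!File.Exists(newHiveDll))
        {
            File.Copy(hiveDll, newHiveDll);
        }

        // Pass the folder around explicitly instead of calling Directory.SetCurrentDirectory,
        // which mutates shared, process-wide state.
        // var fromFile = ElementConfigManager.GetElementConfig(Path.Combine(myFolder, "config.xml"));
    }
}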
I'm having a very hard time with what I feel should be a simple task. Every week, our team queries VMware vCenter for three pieces of output: VM counts in three different locations. Here is what it looks like:
Name Value
---- -----
locationA 1433
locationB 278
locationC 23
The information is emailed to our team, as well as some of the higher-ups who like to see the data. This is all automated with a PowerShell script and Windows Task Scheduler running on a server, no problems.
That data is also placed in a Google Sheet. We just append a new row with the date and copy and paste the data into the three existing columns. It takes 30 seconds, once a week. It seems silly given how little time it takes to copy it over to the Google Sheet, but I really want to automate that last step using the Google Sheets API.
I seem to keep finding and pursuing what feel like online wild goose chases around Google scripting for accessing and editing Google Sheets. I've downloaded and installed the Sheets API libraries, the Drive API libraries, and the Google .NET library, set up the Google developer site, and run through the Google Sheets API documentation and OAuth authentication. I'm using Visual Studio 2013 because I figured that would play the best with PowerShell and calling the .NET commands.
I have pretty much no coding experience outside of PowerShell (if you can call that coding). I can't even figure out how to pull the Google Sheet, much less do anything to it. Nothing I've tried is working so far, and for what little time it takes to copy this info manually every week, I've already spent so much more time than it is probably worth. I feel like if I can get a handle on this, it would open the door for further Google automation in the future, since we operate with a Google domain. At any rate, help is very much appreciated.
Here is my latest scripting attempt in Visual Studio:
using System;
using Google.GData.Client;
using Google.GData.Spreadsheets;

namespace MySpreadsheetIntegration
{
    class Program
    {
        static void Main(string[] args)
        {
            string CLIENT_ID = "abunchofcharacters.apps.googleusercontent.com";
            string CLIENT_SECRET = "secretnumber";
            string REDIRECT_URI = "https://code.google.com/apis/console";
            // NOTE: SCOPE was never defined in this attempt; the legacy Spreadsheets feed scope would be something like:
            string SCOPE = "https://spreadsheets.google.com/feeds https://docs.google.com/feeds";

            OAuth2Parameters parameters = new OAuth2Parameters();
            parameters.ClientId = CLIENT_ID;
            parameters.ClientSecret = CLIENT_SECRET;
            parameters.RedirectUri = REDIRECT_URI;
            parameters.Scope = SCOPE;

            string authorizationUrl = OAuthUtil.CreateOAuth2AuthorizationUrl(parameters);
            Console.WriteLine(authorizationUrl);
            Console.WriteLine("Please visit the URL above to authorize your OAuth "
                + "request token. Once that is complete, type in your access code to "
                + "continue...");
            parameters.AccessCode = Console.ReadLine();

            OAuthUtil.GetAccessToken(parameters);
            string accessToken = parameters.AccessToken;
            Console.WriteLine("OAuth Access Token: " + accessToken);

            GOAuth2RequestFactory requestFactory =
                new GOAuth2RequestFactory(null, "MySpreadsheetIntegration-v1", parameters);
            SpreadsheetsService service = new SpreadsheetsService("MySpreadsheetIntegration-v1");
            service.RequestFactory = requestFactory;

            // NOTE: 'auth' and 'ApplicationName' are not defined anywhere in this snippet,
            // and DriveService/File come from the separate Drive API client library.
            var driveService = new DriveService(auth);
            var file = new File();
            file.Title = "VSI - VM Totals by Service TEST";
            file.Description = string.Format("Created via {0} at {1}", ApplicationName, DateTime.Now.ToString());
            file.MimeType = "application/vnd.google-apps.spreadsheet";
            var request = driveService.Files.Insert(file);
            var result = request.Fetch();

            var spreadsheetLink = "https://docs.google.com/spreadsheets/d/GoogleDoc_ID";
            Console.WriteLine("Created at " + spreadsheetLink);
        }
    }
}
For anyone still following this, I found a solution. I was going about this entirely the wrong way (or at least in a way I couldn't comprehend). One solution to my issue was to create a new Google Apps Script that accesses my email once a week (after we get the report), teases out everything but the data I'm looking for, and sends it to the Google spreadsheet.
Here's the script:
function SendtoSheet() {
  // Grab the most recent message in the newest matching thread
  var thread = GmailApp.search("from:THESENDER in:anywhere subject:THESUBJECTOFTHEEMAILWHICHNEVERCHANGES")[0];
  var message = thread.getMessages().pop();
  var bodytext = message.getBody();

  // Split the body on <br> tags, then on ":"; keep anything numeric
  var counts = [];
  bodytext = bodytext.split('<br>');
  for (var i = 0; i < bodytext.length; i++) {
    var line = bodytext[i].split(':');
    if (line.length > 0) {
      if (!isNaN(line[1])) {
        counts.push(line[1]);
      }
    }
  }

  // Prepend the date and append the row to the "Data by Week" sheet
  var now = new Date();
  counts.unshift(Utilities.formatDate(now, 'EST', 'MM/dd/yyyy'));
  var sheet = SpreadsheetApp.openById("GoogleDocID");
  sheet = sheet.setActiveSheet(sheet.getSheetByName("Data by Week"));
  sheet.appendRow(counts);
}
That counts array contains the magic that extracts the numeric data, by breaking the body up on line breaks and colons. Works perfectly. It didn't involve figuring out how to use Visual Studio, or the .NET Google libraries, or editing the running PowerShell script. Clean and easy.
Hope this helps someone.
I'm using Rotativa 1.6.4 from NuGet and have noticed the following issue with the code below.
ActionAsPdf hangs randomly for an indeterminate amount of time.
Code below that is hanging:
var pdfResult = new ActionAsPdf("Report", new {id = Request.Params["id"]})
{
Cookies = cookieCollection,
FormsAuthenticationCookieName = FormsAuthentication.FormsCookieName,
CustomSwitches = "--load-error-handling ignore"
};
Background info that may help:
The CustomSwitches value is in use to ignore a documented issue when calling wkhtmltopdf.exe via ActionAsPdf, but it does not suppress errors in the code, only in the wkhtmltopdf call.
Observations, usage and testing:
It works, but when running the application (whether or not I'm stepping through code), it can take anywhere from 10 seconds up to about 4 minutes between hitting pdfResult = new ActionAsPdf and finally entering the "Report" action being called. I can't discern anything actually happening in the Visual Studio output window, and no errors are being thrown that I have found. Just a random, slow transition into the Reports() action.
I can run the Reports() action directly via URL and it never slows like this; PDF generation is quite fast. I am using ActionAsPdf to obtain the binary to save to the file system and send via email, which is the prescribed method of doing so for this library.
The behavior exists on both a local Windows 10 dev box and a remote Server 2008R2 Test box. .Net 4.5.1 on both boxes, default IIS on each.
Questions I have:
Any idea what might cause this slowdown and how to remedy it?
I ended up using UrlAsPdf() instead of ActionAsPdf() and it works. It seems there may be some issues with ActionAsPdf(), and I have filed a bug with the Rotativa project on GitHub. ActionAsPdf() is still marked as beta, so hopefully it gets fixed in future versions or by the community.
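As a rough sketch of the swap, assuming a "Reports" controller for the Report action (the controller name and the output path are assumptions; the cookie and switch settings are carried over from the question):

// Build an absolute URL to the same action that ActionAsPdf was targeting.
var reportUrl = Url.Action("Report", "Reports",
                           new { id = Request.Params["id"] }, Request.Url.Scheme);

var pdfResult = new UrlAsPdf(reportUrl)
{
    Cookies = cookieCollection,
    FormsAuthenticationCookieName = FormsAuthentication.FormsCookieName,
    CustomSwitches = "--load-error-handling ignore"
};

// BuildFile returns the raw PDF bytes, which can then be written to disk or attached to an email.
byte[] pdfBytes = pdfResult.BuildFile(ControllerContext);
System.IO.File.WriteAllBytes(@"C:\Temp\report.pdf", pdfBytes);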
In my case, I had to do a few more tweaks along with using UrlAsPdf(). I narrowed the issue down to the cookie collection that I was adding, so I tried adding just the cookie that I needed, and the issue was resolved. The following is the sample code that I used.
var report = new UrlAsPdf(url);
Dictionary<string, string> cookieCollection = new Dictionary<string, string>();
foreach (var key in Request.Cookies.AllKeys)
{
if (Crypto.Hash("_user").Equals(key))
{
cookieCollection.Add(key, Request.Cookies.Get(key).Value);
break;
}
}
report.Cookies = cookieCollection;
report.FormsAuthenticationCookieName = FormsAuthentication.FormsCookieName;
Is there a way, in either JavaScript or C#, to tell whether the browser someone is using has disabled caching of static content?
I need to be able to test whether or not the browser is optimized for caching.
UPDATE
I did a bit more investigation of the problem; you can find a more detailed answer in my recent post.
Note, the solution described below (initially) is not a cross-browser solution.
Not sure if it helps, but you can try the following trick:
1. Add some resource to your page, let's say it will be a javascript file cachedetect.js.
2. The server should generate cachedetect.js each time someone requests it, and it should set cache-related headers in the response, i.e. if the browser's cache is enabled the resource will be cached for a long time. Each cachedetect.js should look like this (a server-side sketch follows the client code below):
var version = [incrementally generated number here];
var cacheEnabled; //will contain the result of our check
var cloneCallback;//function which will compare versions from two javascript files
function isCacheEnabled(){
if(!window.cloneCallback){
var currentVersion = version;//cache current version of the file
// request the same cachedetect.js by adding a <script> tag dynamically to <head>
var head = document.getElementsByTagName("head")[0];
var script = document.createElement('script');
script.type = 'text/javascript';
script.src = "cachedetect.js";
// the newly loaded cachedetect.js will execute the same isCacheEnabled function, so we prevent it
// from loading the script a third time by checking for the existence of cloneCallback
cloneCallback = function(){
// once the file has loaded, version will differ from currentVersion if the cache is disabled
window.cacheEnabled = currentVersion == window.version;
};
head.appendChild(script);
} else {
window.cloneCallback();
}
}
isCacheEnabled();
After that you can simply check for cacheEnabled === true or cacheEnabled === false after some period of time.
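The answer doesn't show the server side; as one possible sketch in ASP.NET (the .ashx handler and the one-hour cache duration are assumptions, not part of the original answer), the server could emit a new version number on every request that actually reaches it, while telling the browser the file is cacheable:

using System;
using System.Web;

public class CacheDetectHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Any value that changes per request works; ticks give an ever-increasing number.
        long version = DateTime.UtcNow.Ticks;

        // Tell the browser it may cache this response for a long time.
        context.Response.Cache.SetCacheability(HttpCacheability.Private);
        context.Response.Cache.SetMaxAge(TimeSpan.FromHours(1));

        context.Response.ContentType = "application/javascript";
        context.Response.Write("var version = " + version + ";\n");
        // ...followed by the rest of the cachedetect.js body shown above.
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

If caching is enabled, the dynamically added script tag gets the cached copy and window.version stays equal to currentVersion; if caching is disabled, the second request reaches the handler and gets a new version number.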
I believe this should work: http://jsfiddle.net/pseudosavant/U2hdy/
Basically you have to preload a file twice and check how long it took. The second time should take less than 10ms (in my own testing). You will want to make sure the file you are testing is sufficiently large that it takes a bit to download; it doesn't have to be huge, though.
var preloadFile = function(url){
  var start = +new Date();
  var file = document.createElement("img");
  file.src = url;
  return +new Date() - start;
};

var testFile = "http://upload.wikimedia.org/wikipedia/en/thumb/d/d2/Mozilla_logo.svg/2000px-Mozilla_logo.svg.png";

var timing = [];
timing.push(preloadFile(testFile));
timing.push(preloadFile(testFile));

var caching = (timing[1] < 10); // timing[1] should be less than 10ms if caching is enabled
Another approach involves both the client and the server (a server-side sketch follows the steps):
1. Make a call to a page/endpoint which sets a random unique id in the response, and set cache headers for this page/endpoint.
2. Make the same call again, which would set a different unique number if it actually reaches the server.
3. If the numbers match, the response is coming from the cache; if they differ, it is coming from the server.
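A minimal server-side sketch of that idea, assuming an ASP.NET MVC endpoint (the controller and action names are illustrative only):

using System;
using System.Web.Mvc;
using System.Web.UI;

public class CacheProbeController : Controller
{
    // The browser may cache this response for 10 minutes; a cached response repeats the same token.
    [OutputCache(Duration = 600, Location = OutputCacheLocation.Client)]
    public ContentResult Token()
    {
        // A fresh value on every request that actually reaches the server.
        return Content(Guid.NewGuid().ToString(), "text/plain");
    }
}

The page requests the endpoint twice; if both responses carry the same token, the second one was served from the browser cache, otherwise caching is disabled.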