I have about 50 background images for my site. What I am looking to do is randomly present the user with a different one for each visit; by this I mean they will surf through the site with the same background image during their visit.
After they close the browser and re-visit or come back later, they are presented with a new random background image. I don't need to save anything about what their previous background image was; just show a random new one for each new visit to the site.
I'm not sure if this can be done with C#, JavaScript, jQuery or CSS.
EDIT: I am using ASP.NET 4.0 with C# for my web app. Thanks.
Don't use cookies, as suggested in the comments; that only adds extra bandwidth to the headers sent to the server with every request.
Instead, use local storage in the browser to save which image was shown last. When a new session is started, increment this value and display the next image.
I've used jStorage on projects and it works fine.
You can save the currently shown image in the browser's storage, along with a session ID. Later, you can check whether the session ID has changed; if so, switch to a different image.
var image = $.jStorage.get("image", 0);
var session_id = $.jStorage.get("session", "put current session id here");
if (session_id != "current session id")
{
    // new session: advance to the next image, wrapping back to 0 after image 49
    image = (image + 1) % 50;
    $.jStorage.set("image", image);
    $.jStorage.set("session", "current session id");
}
// use image to set background
EDIT:
Don't place this JavaScript in each web page. Instead, place it in an ASP.NET page that responds with a JavaScript content type and load it via the page's header. This way, page caching in the browser won't affect the script when the session changes.
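For illustration, here's a minimal sketch of such a handler; the file name Background.ashx, the image naming scheme, and the use of Session.SessionID are my own assumptions, not part of the original answer:
// Background.ashx -- serves the script with the current session id baked in,
// and is marked non-cacheable so a new session is always detected.
public class Background : System.Web.IHttpHandler, System.Web.SessionState.IRequiresSessionState
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.ContentType = "text/javascript";
        context.Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);

        string sessionId = context.Session.SessionID;
        context.Response.Write(
            "var image = $.jStorage.get('image', 0);\n" +
            "var session_id = $.jStorage.get('session', '');\n" +
            "if (session_id != '" + sessionId + "') {\n" +
            "    image = (image + 1) % 50;\n" +
            "    $.jStorage.set('image', image);\n" +
            "    $.jStorage.set('session', '" + sessionId + "');\n" +
            "}\n" +
            "document.body.style.backgroundImage = 'url(/backgrounds/' + image + '.jpg)';\n");
    }

    public bool IsReusable { get { return false; } }
}
You would then reference it near the end of <body> with <script src="Background.ashx"></script> so document.body exists when the script runs.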
Keep it in the Session. Pick it at random when it's not already in the session; it will stay the same as long as they're on your site, and next time they come back they'll get a new one.
For example (my C# is a little rusty):
public string GetBackground(HttpSessionState session) {
    string bg = (string) session["session.randomBG"];
    if (bg == null) {
        // pick a random BG & store it for the rest of the session
        bg = "pick one";
        session["session.randomBG"] = bg;
    }
    return bg;
}
Hope this helps!
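To flesh out the "pick one" placeholder, here's a sketch of the same idea with the random pick filled in; the folder /backgrounds and the file names bg0.jpg through bg49.jpg are my assumptions:
using System;
using System.Web.SessionState;

public static class BackgroundHelper
{
    // a single shared Random avoids re-seeding on every call
    private static readonly Random Rng = new Random();

    public static string GetBackground(HttpSessionState session)
    {
        string bg = (string) session["session.randomBG"];
        if (bg == null)
        {
            // pick one of the 50 images at random and keep it for this session
            bg = "/backgrounds/bg" + Rng.Next(0, 50) + ".jpg";
            session["session.randomBG"] = bg;
        }
        return bg;
    }
}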
var list = [
"/images01.png",
"/images02.png",
...
];
/*background url*/ = list[Math.floor(Math.random() * list.length)];
Sure, it is possible. I will use pseudo-code here to show how it could be done; no doubt concrete code examples will appear soon.
At the beginning of each page:
StartSession()
If ! SessionVariable[myBackground] then
    x = Randomize 50
    SessionVariable[myBackground] = "image0" + x + ".jpg"
endif

<style>
body {background-image:url(SessionVariable[myBackground]);}
</style>
Make sure you use the style tag where appropriate. SessionVariable[myBackground] is a user-created variable. In PHP it would look like this:
$_SESSION['myBackground']
Best wishes,
Try this function:
/**
* Change background image hourly.
* Name your images with 0.jpg, 1.jpg, ..., 49.jpg.
*/
function getBackground2() {
    var expires = 3600000, // one hour in milliseconds
        numOfImages = 50,
        // Math.floor keeps the result in 0..49; Math.round could produce 50, which has no image
        seed = Math.floor(Date.now() / expires % numOfImages);
    return '/path/to/background/' + seed + '.jpg';
}
I'm trying to take a full-page screenshot of my page using this code:
public void OpenEyesForVisualTesting(string testName) {
this.seleniumDriver.Driver = this.eyes.Open(this.seleniumDriver.Driver, "Zinc", testName);
}
public void CheckScreenForVisualTesting() {
this.eyes.Check("Zinc", Applitools.Selenium.Target.Window().Fully());
}
public void CloseEyes() {
this.eyes.Close();
}
but instead I just get half a page in the screenshot. I tried contacting Applitools, but they just told me to replace eyes.CheckWindow() with eyes.Check("tag", Target.Window().Fully());, which still didn't work.
If anyone can help me, that would be great.
I work for Applitools, and sorry for your troubles. Maybe you did not see our replies, or they went to your spam folder. You need to set ForceFullPageScreenshot = true and StitchMode = StitchModes.CSS in order to capture a full-page screenshot.
The code example below is everything you need to do to capture a full-page image. Also, please make sure your .NET Eyes.Sdk version is >= 2.6.0 and Eyes.Selenium >= 2.5.0.
If you have any further questions or are still encountering issues, please feel free to email me directly. Thanks.
var eyes = new Eyes();
eyes.ApiKey = "your-api-key";
eyes.ForceFullPageScreenshot = true;
eyes.StitchMode = StitchModes.CSS;
eyes.Open(driver, "Zinc", testName, new Size(800, 600)); // the last parameter is your viewport size IF testing in a browser; do not set it when testing on mobile devices
eyes.CheckWindow("Zinc");
// or the new Fluent API method
eyes.Check("Zinc", Applitools.Selenium.Target.Window().Fully());
eyes.Close();
Use the Extent Reports library; with it you can take screenshots as well as generate an interactive report of passing and failing cases.
Here is a link explaining how it works:
http://www.softwaretestingmaterial.com/generate-extent-reports/
If you want to take a complete-page screenshot or capture a particular element with Applitools, you can use these lines of code:
Full Screenshot
eyes.setHideScrollbars(false);
eyes.setStitchMode(StitchMode.CSS);
eyes.setForceFullPageScreenshot(true);
eyes.checkWindow();
Take a screenshot of a particular element
WebElement button = driver.findElement(By.xpath("//button[@id='login-btn']"));
eyes.setHideScrollbars(false);
eyes.setStitchMode(StitchMode.CSS);
eyes.open(driver, projectName, testName);
eyes.setMatchLevel(MatchLevel.EXACT);
eyes.checkElement(button);
(I am from Applitools.)
When setting ForceFullPageScreenshot to true, Applitools will try to scroll the HTML element of the page. If HTML is not the scrollable element, then you will need to set the scrollable element yourself:
eyes.Check(Target.Window().ScrollRootElement(selector))
In Firefox, you can see a scroll tag near elements that are scrollable.
I am automating a task using the WebBrowser control; the site displays its pages using frames.
My issue is that I get to a point where I can see the webpage loaded properly in the WebBrowser control, but when I inspect the HTML in code I see nothing.
I have seen other examples here too, but none of them return all of the browser's HTML.
What I get by using this:
HtmlWindow frame = webBrowser1.Document.Window.Frames[1];
string str = frame.Document.Body.OuterHtml;
Is just:
the main frame tag with attributes like the SRC attribute, etc. Is there any way to handle this? The webpage appears completely loaded, so why do I not see the HTML? When I view the page source in Internet Explorer once it has loaded, I do see it; why not here?
ADDITIONAL INFO
There are two frames on the page.
I use this, as above:
HtmlWindow frame = webBrowser1.Document.Window.Frames[0];
string str = frame.Document.Body.OuterHtml;
And I get the correct HTML for the first frame, but for the second one I only see:
<FRAMESET frameSpacing=1 border=1 borderColor=#ffffff frameBorder=0 rows=29,*><FRAME title="Edit Search" marginHeight=0 src="http://web2.westlaw.com/result/dctopnavigation.aspx?rs=WLW12.01&ss=CXT&cnt=DOC&fcl=True&cfid=1&method=TNC&service=Search&fn=_top&sskey=CLID_SSSA49266105122&db=AK-CS&fmqv=s&srch=TRUE&origin=Search&vr=2.0&cxt=RL&rlt=CLID_QRYRLT803076105122&query=%22LAND+USE%22&mt=Westlaw&rlti=1&n=1&rp=%2fsearch%2fdefault.wl&rltdb=CLID_DB72585895122&eq=search&scxt=WL&sv=Split" frameBorder=0 name=TopNav marginWidth=0 scrolling=no><FRAME title="Main Document" marginHeight=0 src="http://web2.westlaw.com/result/dccontent.aspx?rs=WLW12.01&ss=CXT&cnt=DOC&fcl=True&cfid=1&method=TNC&service=Search&fn=_top&sskey=CLID_SSSA49266105122&db=AK-CS&fmqv=s&srch=TRUE&origin=Search&vr=2.0&cxt=RL&rlt=CLID_QRYRLT803076105122&query=%22LAND+USE%22&mt=Westlaw&rlti=1&n=1&rp=%2fsearch%2fdefault.wl&rltdb=CLID_DB72585895122&eq=search&scxt=WL&sv=Split" frameBorder=0 borderColor=#ffffff name=content marginWidth=0><NOFRAMES></NOFRAMES></FRAMESET>
UPDATE
The URLs of the two frames are as follows:
Frame 1, whose HTML I do see:
http://web2.westlaw.com/nav/NavBar.aspx?RS=WLW12.01&VR=2.0&SV=Split&FN=_top&MT=Westlaw&MST=
Frame 2, whose HTML I do not see:
http://web2.westlaw.com/result/result.aspx?RP=/Search/default.wl&action=Search&CFID=1&DB=AK%2DCS&EQ=search&fmqv=s&Method=TNC&origin=Search&Query=%22LAND+USE%22&RLT=CLID%5FQRYRLT302424536122&RLTDB=CLID%5FDB6558157526122&Service=Search&SRCH=TRUE&SSKey=CLID%5FSSSA648523536122&RS=WLW12.01&VR=2.0&SV=Split&FN=_top&MT=Westlaw&MST=
And the properties of the second frame, whose HTML I do not get, are shown in the picture below:
Thank you
I paid for the solution to the question above and it works 100%.
What I did was use the function below, which returned the count from the tag I was seeking but could not find. Call the function like this:
FillFrame(webBrowser1.Document.Window.Frames);
private void FillFrame(HtmlWindowCollection hwc)
{
    if (hwc == null) return;
    foreach (HtmlWindow hw in hwc)
    {
        // look for the element in this frame's document
        HtmlElement getSpanid = hw.Document.GetElementById("mDisplayCiteList_ctl00_mResultCountLabel");
        if (getSpanid != null)
        {
            // doccount is a field on the containing class
            doccount = getSpanid.InnerText.Replace("Documents", "").Replace("Document", "").Trim();
            break;
        }
        // recurse into any nested frames
        if (hw.Frames.Count > 0) FillFrame(hw.Frames);
    }
}
Hope it helps people.
Thank you
To grab the HTML, you can do it this way:
WebClient client = new WebClient();
string html = client.DownloadString(#"http://stackoverflow.com");
That's an example, of course; you can change the address.
By the way, you'll need using System.Net; for this.
This works just fine and gets the BODY element with all inner elements:
Somewhere in your Form code:
wb.Url = new Uri("http://stackoverflow.com");
wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(wbDocumentCompleted);
And here is wbDocumentCompleted:
void wbDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
var yourBodyHtml = wb.Document.Body.OuterHtml;
}
wb is a System.Windows.Forms.WebBrowser.
UPDATE:
The same goes for the frames as for the document: I think your second frame is not loaded at the time you check for its content. You can try the solutions from this link; you will have to wait for your frames to be loaded in order to see their content.
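As a rough illustration of the waiting part (my own sketch, not from the linked answers): DocumentCompleted fires once per frame, so you can hold off reading frame content until the event reports the top-level URL:
void wbDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // DocumentCompleted fires for every frame; only the last event,
    // whose Url matches the control's own Url, means the page is done
    if (e.Url != wb.Url) return;

    foreach (HtmlWindow frame in wb.Document.Window.Frames)
    {
        try
        {
            var html = frame.Document.Body.OuterHtml; // use the frame HTML here
        }
        catch (UnauthorizedAccessException)
        {
            // cross-domain frame; see the cross-frame security answer below
        }
    }
}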
The most likely reason is that frame index 0 has the same domain name as the main/parent page, while the frame index 1 has a different domain name. Am I correct?
This creates a cross-frame security issue, and the WB control just leaves you high and dry and doesn't tell you what on earth went wrong; it simply leaves your objects, properties and data empty (it will say "No Variables" in the watch window when you try to expand the object).
The only thing you can access in this situation is pretty much the URL and iFrame properties, but nothing inside the iFrame.
Of course, there are ways to overcome the cross-frame security issues, but they are not built into the WebBrowser control; they are external solutions, depending on which WB control you are using (that is, the .NET version or the pre-.NET version).
Let me know if I have correctly identified your problem, and if so, if you would like me to tell you about the solution tailored to your setup & instance of the WB control.
UPDATE: I have noticed that you're doing a .getElementByTagName("HTML")(0).outerHTML to get the HTML. All you need to do is call this on the document object, or the .body object, and that should do it: MyDoc.Body.innerHTML should get the content you want. Also, notice that there are additional iframes inside these documents, in case that is relevant. Can you give us the main document URL that has these two URLs in it, so we can replicate what you're doing here? Also, I'm not sure why you are using DomElement; you should just cast it to the native object it wants to be cast to, either an IHTMLDocument2 or the object you see in the watch window, which I think is IHTMLFrameElement (if I recall correctly, but you will know what I mean once you see it). If you are trying to use an XML object, this could be the reason why you aren't able to get the HTML content. Change the object declaration and the casting if there is one, give it a go, and let us know. Now I'm curious too. :)
Is there a way, in either JavaScript or C#, to tell whether the browser that someone is using has disabled caching of static content?
I need to be able to test whether or not the browser is optimized for caching.
UPDATE
I did a bit more investigation of the problem, and you can find a more detailed answer in my recent post.
Note: the solution described below (the one originally posted) is not a cross-browser solution.
Not sure if it helps, but you can try the following trick:
1. Add some resource to your page; let's say it will be a JavaScript file, cachedetect.js.
2. The server should generate cachedetect.js anew each time someone requests it, and the response should contain cache-related headers, i.e. if the browser's cache is enabled the resource should be cached for a long time. Each cachedetect.js should look like this:
var version = [incrementally generated number here];
var cacheEnabled;   // will contain the result of our check
var cloneCallback;  // function which will compare the versions from the two javascript files

function isCacheEnabled() {
    if (!window.cloneCallback) {
        var currentVersion = version; // cache the current version of the file
        // request the same cachedetect.js again by adding a <script> tag dynamically to <head>
        var head = document.getElementsByTagName("head")[0];
        var script = document.createElement('script');
        script.type = 'text/javascript';
        script.src = "cachedetect.js";
        // the newly loaded cachedetect.js will execute the same isCacheEnabled function,
        // so we prevent it from loading the script a third time by checking for cloneCallback's existence
        cloneCallback = function() {
            // once the file is loaded, version will differ from currentVersion if the cache is disabled
            window.cacheEnabled = currentVersion == window.version;
        };
        head.appendChild(script);
    } else {
        window.cloneCallback();
    }
}

isCacheEnabled();
After that you can simply check for cacheEnabled === true or cacheEnabled === false after some period of time.
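The answer shows only the client side; a minimal sketch of the server side in ASP.NET (the handler name CacheDetect.ashx and the static counter are my assumptions) could look like this:
// CacheDetect.ashx -- serves cachedetect.js with a fresh version number on
// every real request, while telling the browser it may cache the file for a
// long time. If caching works, the second request never reaches the server,
// so both copies carry the same version number.
public class CacheDetect : System.Web.IHttpHandler
{
    private static long _version; // incremented on every request that reaches the server

    public void ProcessRequest(System.Web.HttpContext context)
    {
        long version = System.Threading.Interlocked.Increment(ref _version);

        context.Response.ContentType = "text/javascript";
        context.Response.Cache.SetCacheability(System.Web.HttpCacheability.Public);
        context.Response.Cache.SetExpires(System.DateTime.UtcNow.AddDays(1));

        context.Response.Write("var version = " + version + ";\n");
        // ...followed by the rest of the script shown above
    }

    public bool IsReusable { get { return true; } }
}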
I believe this should work: http://jsfiddle.net/pseudosavant/U2hdy/
Basically, you have to preload a file twice and check how long each load took. The second time should take less than 10 ms (in my own testing). You will want to make sure the file you are testing is sufficiently large that it takes a little while to download; it doesn't have to be huge, though.
var preloadFile = function (url) {
    var start = +new Date();
    var file = document.createElement("img");
    file.src = url;
    return +new Date() - start;
};

var testFile = "http://upload.wikimedia.org/wikipedia/en/thumb/d/d2/Mozilla_logo.svg/2000px-Mozilla_logo.svg.png";

var timing = [];
timing.push(preloadFile(testFile));
timing.push(preloadFile(testFile));

var caching = (timing[1] < 10); // timing[1] should be less than 10ms if caching is enabled
Another approach that involves both client and server:
1. Make a call to a page/endpoint which sets a random unique ID in the response, and set cache headers for this page/endpoint.
2. Make the same call again, which would generate a different unique number if it actually hits the server.
3. If the numbers match, the response came from the cache; if they differ, it came from the server.
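A sketch of such an endpoint in ASP.NET; the handler name CacheProbe.ashx is my own invention:
// CacheProbe.ashx -- returns a fresh GUID on every real request but is marked
// cacheable. Fetch it twice from the client: equal GUIDs mean the second
// response was served from the browser cache.
public class CacheProbe : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Cache.SetCacheability(System.Web.HttpCacheability.Public);
        context.Response.Cache.SetExpires(System.DateTime.UtcNow.AddHours(1));
        context.Response.Write(System.Guid.NewGuid().ToString());
    }

    public bool IsReusable { get { return true; } }
}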
I'm just getting started with WatiN and am attempting to test a large number of pages behind authentication. I've taken the approach of only creating a new instance of IE each time new login details are required. Once authenticated, the framework needs to navigate to two pages on the site and click a link on each to download a file (repeated multiple times within one authenticated session for different clients).
Navigating to the pages is fine, and the download works in IE9 using a combination of WatiN and SendKeys(). However, when it navigates to the second page and attempts to find the Link object by text (which is the same text as on the previous page), it returns the download URL from the first page. Essentially, whatever page I direct WatiN to, it seems to persist the Link object from the first page.
The first method creates my browser object and returns it to the parent class:
public IE CreateBrowser(string email, string password, string loginUrl)
{
Settings.MakeNewIe8InstanceNoMerge = true;
Settings.AutoCloseDialogs = true;
IE ie = new IE(loginUrl);
ie.TextField(Find.ById("Email")).TypeText(email);
ie.TextField(Find.ById("Password")).TypeText(password);
ie.Button(Find.ById("btnLogin")).Click();
Thread.Sleep(1500);
return ie;
}
I then iterate through the logins, passing the URL for each required page to the following:
public void DownloadFile(IE ie, string url)
{
//ie.NativeBrowser.NavigateTo(new Uri(url));
ie.GoTo(url);
Thread.Sleep(1000);
//TODO: Why is link holding on to old object?
Link lnk = null;
lnk = ie.Link(Find.ByText("Download file"));
lnk.WaitUntilExists();
lnk.Focus();
lnk.Click();
//Pause to allow IE file dialog box to show
Thread.Sleep(2000);
//Alt + S to save
SendKeys.SendWait("%(S)");
}
The calling method ties it all together like so (I've obfuscated some of the details):
for (int i = 0; i < loginCount; i++)
{
using (IE ie = HelperClass.CreateBrowser(lLogins[i].Email, lLogins[i].Password, ConfigurationManager.AppSettings["loginUrl"]))
{
...Gets list of clients we're wanting to check
for (int j = 0; j < clientCount; j++)
{
string url = "";
switch ()
{
case "Page1":
string startDate = "20110831";
string endDate = "20110901";
url = String.Format(page1BaseUrl, HttpUtility.UrlEncode(lClients[j].Name), startDate, endDate);
break;
case "Page2":
url = String.Format(page2BaseUrl, HttpUtility.UrlEncode(lClients[j].Name));
break;
}
HelperClass.DownloadFile(ie, url);
}
}
}
Does anyone have any idea what could be causing this or how to get around it? Do I need to create a new IE object for each request?
Okay, so I've managed to find out what was causing my Link object (and the parent Page object) to persist across multiple URLs.
It seems that because I'm clicking the Link that triggers the "Save As" box in IE9, the Page object is kept current even as the browser runs through all the other URLs in the background. The HTML rendered in the window is updated, but the existing Page object is never released (or possibly additional Page objects are created in memory).
Because I'm using SendKeys() to hit the "Save" button rather than a dialog handled by WatiN, the dialog stays open and the Page object persists.
From the looks of things, I need to find a different, handled way of performing my file downloads/saving.
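For what it's worth, WatiN's dialog handlers are the usual "handled" route. Below is a sketch using FileDownloadHandler; be warned that it targets the classic IE download dialog, and I'm not certain it covers IE9's download bar (which is presumably why SendKeys was used in the first place):
using WatiN.Core;
using WatiN.Core.DialogHandlers;

// let WatiN handle the download dialog instead of SendKeys, so the dialog is
// dismissed properly and the Page object can be released
public void DownloadFileHandled(IE ie, string url, string savePath)
{
    ie.GoTo(url);

    var handler = new FileDownloadHandler(savePath);
    ie.AddDialogHandler(handler);
    try
    {
        // ClickNoWait so the blocking dialog doesn't hang the test
        ie.Link(Find.ByText("Download file")).ClickNoWait();
        handler.WaitUntilFileDownloadDialogIsHandled(15);
        handler.WaitUntilDownloadCompleted(60);
    }
    finally
    {
        ie.RemoveDialogHandler(handler);
    }
}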
A video can be restricted if it is set to be unavailable in the user's region, if it is private, or if the video's owner has set limitations on where it can be displayed. I don't want to display such videos.
The query I have at the moment:
Feed<Video> videoFeed = request.GetStandardFeed(
"http://gdata.youtube.com/feeds/api/videos?v=2" +
"&format=5&iv_load_policy=3&q=" + this.textBox1.Text);
Initially I build a list of type item that acts as a datasource. A precondition here could also fix my problem.
foreach (Video entry in feed.Entries)
dsList.Add(new item { ID = entry.VideoId, TITLE = entry.Title });
How do I use the Youtube API to check if a video viewing is restricted?
EDIT:
I assumed I could use:
foreach (Video entry in feed.Entries)
if (entry.Status == null)
dsList.Add(new item { ID = entry.VideoId, TITLE = entry.Title });
But there are at least two problems with that:
1. The YouTube API can return at most 50 items per page over 10 pages per query, for a maximum of 500 items; that is more than gets used in the average case. But if restricted content has higher ordering precedence (example: major-label music videos), then 99% or more of the results can get thrown away.
2. The filter works for most cases, but it does not seem to work for EMI's Coldplay - Every Teardrop Is A Waterfall (Official), which is listed under the top-rated videos feed and which I don't want to display.
The YouTube API includes an attribute called accessControl. This includes fields that tell you whether embedding and playback on mobile devices and televisions are allowed.
See this page for more:
https://developers.google.com/youtube/2.0/reference#youtube_data_api_tag_yt:accessControl
I would take a look at the API documentation on custom parameters.
Specifically, the restriction and region parameters.
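For instance, the feed request from the question could pass the viewer's country so the feed filters out videos blocked there. The country code "US" below is just an example value; check the v2 parameter docs for the exact semantics:
// same query as in the question, with a restriction parameter added so the
// feed applies playback restrictions for the given country (or IP address)
Feed<Video> videoFeed = request.GetStandardFeed(
    "http://gdata.youtube.com/feeds/api/videos?v=2" +
    "&format=5&iv_load_policy=3" +
    "&restriction=US" +
    "&q=" + this.textBox1.Text);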
A very simple solution:
' Requires System.Drawing and System.IO imports; WC is a shared WebClient
Private WC As New System.Net.WebClient()

Public Function IsVideoExists(VideoID As String) As Boolean
    Try
        ' If the high-quality thumbnail can be downloaded, the video exists and is viewable
        If Image.FromStream(New MemoryStream(WC.DownloadData("http://i3.ytimg.com/vi/" + VideoID + "/hqdefault.jpg"))).Height > 0 Then Return True
    Catch ex As Exception
        Return False
    End Try
End Function
You can find out whether a video exists/is allowed to be viewed this way, in six lines of code, even without the API.
That's VB code; you could convert it to your own language.
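A rough C# equivalent of the same thumbnail check might look like this (same caveat as the VB version: it only proves the thumbnail is downloadable):
using System.Drawing;
using System.IO;
using System.Net;

public static bool IsVideoViewable(string videoId)
{
    try
    {
        using (var wc = new WebClient())
        {
            // if the high-quality thumbnail downloads and parses as an image,
            // the video exists and is publicly viewable
            byte[] data = wc.DownloadData("http://i3.ytimg.com/vi/" + videoId + "/hqdefault.jpg");
            using (var ms = new MemoryStream(data))
            using (var img = Image.FromStream(ms))
            {
                return img.Height > 0;
            }
        }
    }
    catch
    {
        return false;
    }
}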