I'm working on a C# service that publishes updates to a Fan Page using the C# SDK. I have it publishing updates just fine, but some of them are YouTube videos that, when published "manually", would be "embedded" and viewable on the Fan Page itself.
The main part is very straightforward code that I found here at Stack Overflow:
// Build the post payload as a dynamic object
dynamic messagePost = new System.Dynamic.ExpandoObject();
messagePost.access_token = [access_token];
messagePost.picture = "http://img.youtube.com/vi/abc123/default.jpg"; // video thumbnail
messagePost.link = "http://www.youtube.com/watch?v=abc123";           // link to the video
messagePost.name = "Test Name";
messagePost.caption = "{*actor*} " + "This is just a test...";
messagePost.description = "This is a test post description.";
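The snippet above only builds the payload; presumably it is then published through the SDK along these lines (a minimal sketch; PAGE_ID is a hypothetical placeholder for the Fan Page's id):
var fb = new Facebook.FacebookClient();
fb.Post("/PAGE_ID/feed", messagePost); // access_token is carried in the payload above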
It all works fine, except the post ends up as a link to the video instead of the video being "embedded". Any guidance? I've searched for several hours now and tried different combinations, all to no avail.
Thanks!
Just goes to show that when the question seems simple, it probably is. So, in case someone else ever needs the answer to this one, I found it once I resumed searching.
Only use the .link and .name attributes. If you leave the remaining attributes unset, the video appears "inline" on Facebook.
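In other words, a minimal payload along these lines is enough (reusing the abc123 video id from the question):
dynamic messagePost = new System.Dynamic.ExpandoObject();
messagePost.access_token = [access_token];
messagePost.link = "http://www.youtube.com/watch?v=abc123";
messagePost.name = "Test Name";
// leave picture, caption and description unset so Facebook embeds the player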
Great resource, sorry to have posted too early.
Has anyone seen any documentation on the WebView2 DevToolsProtocolHelper?
In another question I asked (How do I programmatically add a file to a fileupload control from a windows form to a webpage), it was suggested that I download and use Microsoft.Web.WebView2.DevToolsProtocolExtensions. At first it seemed like it was going to be very straightforward to use, but not so much.
WinForms app using C# and WebView2:
DevToolsProtocolHelper helper = webView21.CoreWebView2.GetDevToolsProtocolHelper();
Task<DOM.Node> t = helper.DOM.GetDocumentAsync();
Task<int> querySelectorResponse = helper.DOM.QuerySelectorAsync(t.Result.NodeId, "#fileupload");
_ = helper.DOM.SetFileInputFilesAsync(new string[] { filename }, querySelectorResponse.Result);
These four lines of code should get the document and search it for the #fileupload node. I get nothing but errors, and I have not seen any real examples or documentation on this.
Any help would be greatly appreciated.
**** UPDATE ****
DevToolsProtocolHelper helper = webView21.CoreWebView2.GetDevToolsProtocolHelper();
DOM dom = helper.DOM;
DOM.Node t = await dom.GetDocumentAsync(-1, true);
int querySelectorResponse = await dom.QuerySelectorAsync(t.NodeId, "#fileupload");
_ = helper.DOM.SetFileInputFilesAsync(new string[] { filename }, t.NodeId); // note: passes the document's NodeId, not querySelectorResponse
Here is the latest version of my code, and it seems I have made progress. When I used CefSharp, the IDs I got back for the document and for #fileupload were always the same, and the file upload worked.
With the code above, I am getting IDs, but they are always different, and the file does not upload.
Another update: when I run this code (from a button click on the WinForm) a second time, I do get the proper ID (504) from the int querySelectorResponse = await dom.QuerySelectorAsync(t.NodeId, "#fileupload") line. Still, the file does not upload to the page.
Again, any help would be greatly appreciated
The GetDevToolsProtocolHelper documentation is the 'How to' article on Using Chromium DevTools Protocol in WebView2.
Separately, you cannot use Task.Result with WebView2 tasks, which I see you doing in the code above. WebView2 can only be used from the UI thread it's created on, and it requires that UI thread to communicate task completions, so blocking on Task.Result deadlocks. You should be able to use await instead.
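For illustration, here is how those four lines might look rewritten with await (a sketch assuming an async event handler on the UI thread; note that SetFileInputFilesAsync wants the node id returned by QuerySelectorAsync):
DevToolsProtocolHelper helper = webView21.CoreWebView2.GetDevToolsProtocolHelper();
DOM.Node docNode = await helper.DOM.GetDocumentAsync(-1, true);
int fileInputNodeId = await helper.DOM.QuerySelectorAsync(docNode.NodeId, "#fileupload");
await helper.DOM.SetFileInputFilesAsync(new string[] { filename }, fileInputNodeId);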
I am coming back to work on a BOT that scraped data from a site once a day for my personal use.
However, they changed the code during COVID, and now it seems they are loading a lot of the content with Ajax/JavaScript.
I thought that if I did a WebRequest and obtained the response HTML from a URL, it would match the content I see in a browser (FF/Chrome) when I right-click and "view source". I thought the actual DOM and generated source would come later, as onload events fired, scripts lazily loaded, and so on.
However, the source HTML my BOT obtains is NOT the same as the HTML I see when viewing the source code, so the regular expressions that find certain links have nothing to match.
Why am I seeing a difference between "view source" and a download of the HTML?
I can only think that when the page loads, scripts run that load other content into the page, and that when I "view source" I am actually seeing a partially generated source rather than the original source code. So is there a way I can have my BOT request the page and wait X seconds before obtaining the response, to get this "onload"-generated HTML?
Or, even better, is there a way for MY BOT (not using someone else's) to view the generated source?
This BOT runs as a web service. I could find another site to scrape, but it's painful when all my regular expressions work on the source I see, except that it's NOT the source my BOT obtains.
I am a bit confused as to why my browser shows me more content in "view source" (not generated source) than my BOT gets when making a valid request.
Any help would be much appreciated; I have been working on this project on and off for almost eight years, and this change has broken one of the core parts of the system.
In response to OP's comment, here is the Java code for clicking on different parts of the screen to do this:
You could use Java's Robot class. I just learned about it a few days ago:
// Imports
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;

// Code
void click(int x, int y, int buttonMask) throws AWTException {
    Robot robot = new Robot();      // throws AWTException if low-level input is unavailable
    robot.mouseMove(x, y);          // move the pointer to (x, y) in screen coordinates
    robot.mousePress(buttonMask);   // e.g. InputEvent.BUTTON1_DOWN_MASK
    robot.mouseRelease(buttonMask);
}
You would then run the click function with the x and y position to click, as well as the button mask (InputEvent.BUTTON1_DOWN_MASK, InputEvent.BUTTON2_DOWN_MASK, etc.). Note that Robot.mousePress expects these InputEvent masks, not the MouseEvent.BUTTON1-style constants.
After stringing together the right positions (this will vary depending on the screen) you could do just about anything.
To use shortcuts, just use the keyPress and keyRelease functions. Here is a good way to do this:
// Requires: import java.awt.event.KeyEvent;
void key(int keyCode, boolean ctrl, boolean alt, boolean shift) throws AWTException {
    Robot robot = new Robot(); // in real code, reuse a single Robot instance
    // Hold down the requested modifier keys
    if (ctrl)
        robot.keyPress(KeyEvent.VK_CONTROL);
    if (alt)
        robot.keyPress(KeyEvent.VK_ALT);
    if (shift)
        robot.keyPress(KeyEvent.VK_SHIFT);
    // Tap the main key
    robot.keyPress(keyCode);
    robot.keyRelease(keyCode);
    // Release the modifiers again
    if (ctrl)
        robot.keyRelease(KeyEvent.VK_CONTROL);
    if (alt)
        robot.keyRelease(KeyEvent.VK_ALT);
    if (shift)
        robot.keyRelease(KeyEvent.VK_SHIFT);
}
Thus, something like Ctrl+Shift+I to open the inspect menu would look like this:
key(KeyEvent.VK_I, true, false, true);
Here are the steps to copy a website's code (from the inspector) with Google Chrome:
1. Ctrl + Shift + I
2. Right-click the HTML tag
3. Select "Edit as HTML"
4. Ctrl + A
5. Ctrl + C
Then, you can use the technique from this Stack Overflow answer to get the content from the clipboard:
// getData throws if no plain text is on the clipboard
Clipboard c = Toolkit.getDefaultToolkit().getSystemClipboard();
String text = (String) c.getData(DataFlavor.stringFlavor);
Use something like FileOutputStream to put the info into a file:
FileOutputStream output = new FileOutputStream(new File( PATH HERE ));
output.write(text.getBytes());
output.close(); // or wrap in try-with-resources to close automatically
I hope this helps!
I seem to have fixed it just by turning on the ability to store cookies in my custom HTTP (Bot/Scraper) class that was being called from the class trying to obtain the data. Probably the site has a defense system against visitors who request pages but not the JS/CSS, and who show up with a different session ID on each request.
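In practical terms, the fix amounts to attaching one shared CookieContainer to every request so the session persists; a minimal sketch with HttpWebRequest (the URL is illustrative):
using System.IO;
using System.Net;

// One shared cookie jar keeps the session alive across requests,
// so the site stops issuing a new session ID every time.
CookieContainer cookieJar = new CookieContainer();

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/page");
request.CookieContainer = cookieJar; // without this, cookies are silently discarded
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}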
However, I would like to see some other examples, because if it is just cookies, the site could use JavaScript to test for JavaScript, e.g. an AJAX call to log whether JS is actually on, or some DOM manipulation to determine whether you are really human, which would break it again.
Every site uses different methods to prevent scrapers, email harvesters, link harvesters, etc., including working out the standard time between requests for 100% verifiable humans versus BOTS and then using those values to help detect spoofed user-agents and the like. I wrote a whole system to stop BOTS at my last place of work, and it's a layered approach. I'm just glad enabling cookies solved it on this site, but the defense could easily be beefed up with other tricks to tell BOTS from HUMANS.
I do know some Java, enough to work out what is going on anyway. My BOT is in C#.
So this is maybe a dumb question, but I am using BitcoinLib for C# and I am trying to get this line to work:
IBitcoinService BitcoinService = new BitcoinService("https://localhost:5051/", "aaa" ,"aaa","vvvv", 5);
What I don't know: what to put in there. I tried watching videos and reading documentation, but nowhere does it say what website/password/account to enter. And once I know what to input, how can I mine and then send bitcoins to my wallet? I know this sounds stupid, but I really don't understand how to program it...
What I tried: reading the documentation, watching some videos, and downloading a demo app; nothing helped me. Either I am dumb or it's complicated.
Btw: I know how mining and Bitcoin work (the basics).
Configure your Bitcoin Core wallet properly in bitcoin.conf:
rpcuser=MyRpcUsername
rpcpassword=MyRpcPassword
server=1
txindex=1
Then you can just instantiate the BitcoinService like this:
IBitcoinService BitcoinService = new BitcoinService();
and it will work; you don't need to define the parameters explicitly in code. If you need to change them at runtime, you can do so through:
(IBitcoinService).Parameters
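For example, the explicit constructor call from the question would line up with the bitcoin.conf values like this (a sketch; Bitcoin Core's default RPC port is 8332, and the wallet passphrase only matters if the wallet is encrypted):
IBitcoinService bitcoinService = new BitcoinService(
    "http://localhost:8332/", // RPC url of the local Bitcoin Core node
    "MyRpcUsername",          // rpcuser from bitcoin.conf
    "MyRpcPassword",          // rpcpassword from bitcoin.conf
    "MyWalletPassphrase",     // wallet passphrase (example value)
    5);                       // RPC request timeout in seconds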
I'm running across this issue when debugging or running my Coded UI automation project: I get the exception "COM object that has been separated from its underlying RCW cannot be used." (System.Runtime.InteropServices.InvalidComObjectException) every time I come from a browser window that contains an embedded PDF reader. It happens every time I retrieve the window and try to click back; it barfs when I perform the Back method on it. I've tried different things, including the playback wait, but none has worked.
public BrowserWindow ReturnPDFDoc()
{
    Playback.Wait(1000);
    var myPdFdoc = GlobalVariables.Browser;
    return myPdFdoc;
}

// Calling code:
var hereIsmypdf = ReturnPDFDoc();
hereIsmypdf.Back(); // throws InvalidComObjectException here
The only way I was able to get around this issue was not to use the BrowserWindow class. I ended up using the WinWindow class and just getting the tab of the window from it. The BrowserWindow class seemed to trigger the "COM object that has been separated from its underlying RCW cannot be used" (InvalidComObjectException) exception every time I tried to retrieve it. I hope this helps someone, or maybe someone has a better way to handle this issue.
For the people that voted my question down: I really did try to figure it out. Sorry I wasn't clear about what I was asking the community, or couldn't properly articulate what this pain was. I'm sure someone is probably going through the same pain I did and having a hard time articulating what's going on.
Here is the code for what I ended up doing:
public WinTabPage ReturnPDFDoc()
{
    // Find the top-level IE window
    WinWindow Wnd = new WinWindow();
    Wnd.SearchProperties[BrowserWindow.PropertyNames.ClassName] = "IEFrame";

    // Locate the tab row inside it
    WinTabList tabRoWlist = new WinTabList(Wnd);
    tabRoWlist.SearchProperties[WinTabPage.PropertyNames.Name] = "Tab Row";

    // Pick out the tab we want, forcing a fresh search each time
    WinTabPage myTab = new WinTabPage(tabRoWlist);
    myTab.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);
    myTab.SearchProperties[WinTabPage.PropertyNames.Name] = "something";
    return myTab;
}
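Calling it then looks something like this (Mouse.Click is the standard Coded UI helper; the tab name is whatever you searched for):
WinTabPage pdfTab = ReturnPDFDoc();
Mouse.Click(pdfTab); // act on the tab instead of calling Back() on a BrowserWindow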
I'm trying to do some simple DOM manipulation when a page is rendered as a PDF using ABCPdf. I followed what they document here: http://www.websupergoo.com/helppdf9net/source/5-abcpdf/xhtmloptions/2-properties/usescript.htm
But when I try something as simple as the following:
var doc = new Doc();
doc.HtmlOptions.UseScript = true;
doc.HtmlOptions.UseNoCache = true;
doc.HtmlOptions.PageCachePurge();
doc.HtmlOptions.OnLoadScript = @"var reportElms = document.getElementsByClassName(""report"");";
doc.Page = doc.AddPage();
doc.AddImageUrl(Url.Action("TestPdf", "Pdf", new { }, "http"));
I get the exception:
Unable to render HTML. Unable to apply JScript.
COM error 80020101.
Script 'var reportElms = document.getElementsByClassName("report");'.
Any thoughts as to what I'm doing wrong?
Not even the built-in functions work
I'm even getting the same exception with the following script:
doc.HtmlOptions.OnLoadScript = @"
    window.ABCpdf_RenderWait();
    window.ABCpdf_RenderComplete();";
Btw, I'm using version 8 because that's what we have a licence for.
Edit:
I was missing the .external for the ABCpdf_RenderWait() and ABCpdf_RenderComplete() calls. It works if you reference them properly (imagine that):
doc.HtmlOptions.OnLoadScript = @"
    window.external.ABCpdf_RenderWait();
    window.external.ABCpdf_RenderComplete();";
Though as I mention in my answer, there are a lot of security hoops that need to be jumped through for IE also.
So I didn't actually get the IE engine to execute JavaScript the way I wanted but I was able to find a solution using the Gecko engine. The original NuGet install did not include the Gecko DLL, so I just downloaded the standalone install and added the DLLs manually.
After that everything worked exactly as expected.
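For anyone else trying this, selecting the engine is a one-line setting; a sketch assuming the EngineType enum from the ABCPdf 9-era API (verify against your licensed version):
var doc = new Doc();
doc.HtmlOptions.Engine = EngineType.Gecko; // use Gecko instead of the default IE/MSHTML engine
doc.HtmlOptions.UseScript = true;          // JavaScript now runs under Gecko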
I believe the IE engine didn't work because it requires a lot of security configuration; the FAQs spend a lot of time discussing how to debug security issues: http://www.websupergoo.com/support.htm#6.7