I have the following code running in a windows service:
WebClient webClient = new WebClient();
webClient.Credentials = new NetworkCredential("me", "12345", "evilcorp.com");
webClient.DownloadFile(downloadUrl, filePath);
Each time, I get the following exception:
{"The remote server returned an error: (401) Unauthorized."}
With the following inner exception:
{"The function requested is not supported"}
I know for sure the credentials are valid; in fact, if I go to downloadUrl in my web browser and enter my credentials as evilcorp.com\me with password 12345, it downloads fine.
What is weird though is that if I specify my credentials as me@evilcorp.com with 12345, it appears to fail.
Is there a way to format credentials?
webClient.UseDefaultCredentials = true; resolved my issue.
Apparently the OS you are running on matters, as the default encryption has changed between OSes.
This blog has more details: http://ferozedaud.blogspot.com/2009/10/ntlm-auth-fails-with.html
This has apparently also been discussed on stackoverflow here: 407 Authentication required - no challenge sent
I would suggest reading the blog first, as the distilled knowledge is there.
According to the MSDN docs, the exception can occur when the method is called simultaneously on multiple threads. The DownloadFile method also requires a fully qualified URL, such as http://evilcorp.com/.
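Putting those points together, a minimal sketch might look like the following. The host, path, and account values are placeholders, not the poster's real ones, and the helper just illustrates the "fully qualified URL" requirement:

```csharp
using System;
using System.Net;

// Sketch only: host, path, and account are placeholder values.
static void DownloadWithCredentials(string url, string user, string password,
                                    string domain, string filePath)
{
    using (var webClient = new WebClient())
    {
        // NetworkCredential carries the domain separately, so the user name
        // itself should not contain "domain\" or "@domain".
        webClient.Credentials = new NetworkCredential(user, password, domain);
        webClient.DownloadFile(url, filePath);
    }
}

// DownloadFile needs an absolute URL such as http://evilcorp.com/files/report.pdf
static bool IsFullyQualified(string url) =>
    Uri.TryCreate(url, UriKind.Absolute, out _);
```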
For me, 'webClient.UseDefaultCredentials = true;' solves it only locally, not in the web app on the server connecting to another server. I couldn't add the needed credential to Windows as a user; I later found a programmatic way to do it, but I won't test it as I had already built my own solution. I also don't want to tamper with the web server's registry, even though I have the needed admin rights. All these problems stem from Windows' internal handling of NTLM authentication ("Windows domain") and all the libraries and frameworks built on top of it (e.g. .NET).
So my solution was simple in concept: create a proxy app in a cross-platform technology with a cross-platform NTLM library, where the NTLM exchange is implemented by hand according to the public specs rather than by the built-in Windows code. I chose Node.js and the httpntlm library, because it is essentially a single source file with few lines, and I call it from .NET as a program that returns the downloaded file (I also prefer transferring it through standard output instead of creating a temporary file).
Node.js program as a proxy to download a file behind the NTLM authentication:
var httpntlm = require('httpntlm'); // https://github.com/SamDecrock/node-http-ntlm
//var fs = require('fs');

var login = 'User';
var password = 'Password';
var domain = 'Domain';
var file = process.argv[2]; // file to download, passed as a parameter

httpntlm.get({
    url: 'https://server/folder/proxypage.aspx?filename=' + file,
    username: login,
    password: password,
    workstation: '',
    domain: domain,
    binary: true // don't forget this for binary files
}, function (err, res) {
    if (err) {
        console.log(err);
    } else if (res.headers.location) {
        // in my case, the server redirects to a similar URL, now containing the session ID
        httpntlm.get({
            url: 'https://server' + res.headers.location,
            username: login,
            password: password,
            workstation: '',
            domain: domain,
            binary: true // don't forget this for binary files
        }, function (err, res) {
            if (err) {
                console.log(err);
            } else {
                //console.log(res.headers);
                /*fs.writeFile("434980.png", res.body, function (err) { // test write to binary file
                    if (err)
                        return console.log("Error writing file");
                    console.log("434980.png saved");
                });*/
                // I didn't find a way to output a binary file directly;
                // toString('binary') is not enough (the docs say it's just 'latin1'),
                // so I output base64 and convert it back in the caller code.
                console.log(res.body.toString('base64'));
            }
        });
    } else { // if there's no redirect
        //console.log(res.headers);
        console.log(res.body.toString('base64')); // base64, converted back in the caller
    }
});
.NET caller code (the web app downloading files from a web app on another server):
public static string ReadAllText(string path)
{
    if (path.StartsWith("http"))
        return System.Text.Encoding.Default.GetString(ReadAllBytes(path));
    else
        return System.IO.File.ReadAllText(path);
}

public static byte[] ReadAllBytes(string path)
{
    if (path.StartsWith("http"))
    {
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = "node.exe"; // Node.js installs into the PATH
        psi.Arguments = "MyProxyDownloadProgram.js " +
            path.Replace("the base URL before the file name", "");
        psi.WorkingDirectory = "C:\\Folder\\With My\\Proxy Download Program";
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;
        psi.RedirectStandardInput = true;
        psi.RedirectStandardOutput = true;
        psi.RedirectStandardError = true;
        Process p = Process.Start(psi);
        byte[] output;
        try
        {
            byte[] buffer = new byte[65536];
            using (var ms = new MemoryStream())
            {
                while (true)
                {
                    int read = p.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
                    if (read <= 0)
                        break;
                    ms.Write(buffer, 0, read);
                }
                output = ms.ToArray();
            }
            p.StandardOutput.Close();
            p.WaitForExit(60 * 60 * 1000); // wait up to 60 minutes
            if (p.ExitCode != 0)
                throw new Exception("Exit code: " + p.ExitCode);
        }
        finally
        {
            p.Close();
            p.Dispose();
        }
        // convert the outputted base64-encoded string back to binary data
        return System.Convert.FromBase64String(System.Text.Encoding.Default.GetString(output));
    }
    else
    {
        return System.IO.File.ReadAllBytes(path);
    }
}
Hmm. Lots of answers, but I wonder if answering your last question would have solved everything. "me" is not an authorization type (unless your server has added support for it, of course!). You probably want "Basic".
Also keep in mind that some web services require you to send the authorization header on the initial request, and this won't do that; rather, it responds with it after getting an authorization-required response from the server. If you need this, you need to create your own Authorization header.
String basicToken = Base64Encoding.EncodeStringToBase64(String.Format("{0}:{1}", clientId.Trim(), clientSecret.Trim()));
webClient.Headers.Add("Authorization", String.Format("Basic {0}", basicToken));
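Base64Encoding.EncodeStringToBase64 above looks like a project-local helper; assuming it just base64-encodes the UTF-8 bytes of the string, the equivalent with only the BCL would be:

```csharp
using System;
using System.Text;

// Build the value for a preemptive Basic Authorization header.
static string BasicAuthHeader(string clientId, string clientSecret)
{
    var basicToken = Convert.ToBase64String(
        Encoding.UTF8.GetBytes($"{clientId.Trim()}:{clientSecret.Trim()}"));
    return $"Basic {basicToken}";
}
```

e.g. `webClient.Headers.Add("Authorization", BasicAuthHeader(clientId, clientSecret));`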
And of course as people have pointed out, setting UseDefaultCredentials to true works if you are using IIS (or other windows security aware http server) in a windows environment.
TL;DR: I am grasping at straws here. Has anybody got SSO with CefSharp working who can point me to what I am doing wrong? I try to connect to an SSL SSO page through CefSharp but it won't work, and neither does it in the Chrome browser. With IE it just works. I added the site to the trusted sites (Proxy/Security), I tried to whitelist the URL for Chrome via policy in the registry, and I tried different CefSharp settings. Nothing helped.
I am trying (to no avail) to connect to a SSO enabled page via CefSharp-Offline-browsing.
Browsing with normal IE it just works:
I get a 302 answer
the redirected site gives me a 401 (Unauthorized) with NTLM, Negotiate
IE automagically sends the NTLM auth and receives an NTLM WWW-Authenticate
after some more 302s it ends in a 200 and a logged-in state on the website
Browsing with Chrome 69.0.3497.100 fails:
I guess this is probably due to the fact that the webserver is setup on a co-workers PC and uses a self-signed cert.
F12-Debugging in IE/Chrome:
In IE I see a 302, followed by two 401 answers, and end on the logged in site.
In chrome I see only 302 and 200 answers and end on the "fallback" login site for user/pw entry.
The main difference in (one of the 302) request headers is NEGOTIATE vs NTLM
// IE:
Authorization: NTLM TlRMT***==
// Chrome:
Authorization: Negotiate TlRMT***==
Upgrade-Insecure-Requests: 1
DNT: 1
No luck connecting through CefSharp so far; I simply land in its RequestHandler.GetAuthCredentials() - I do not want to pass any credentials with that.
What I tried to get it working inside Windows / Chrome:
installed the self-signed cert as "trusted certificate authorities"
added the co-workers host to the Windows Internet Proxy settings as trusted site
added the co-workers host to Software\Policies\Google\Chrome\ registry as
https://dev.chromium.org/administrators/policy-list-3#AuthServerWhitelist
https://dev.chromium.org/administrators/policy-list-3#AuthNegotiateDelegateWhitelist
which all in all did nothing: I still do not get any SSO using Chrome:
What I tried to get it working inside CefSharp:
deriving from CefSharp.Handler.DefaultRequestHandler, overriding
OnSelectClientCertificate -> never gets called
OnCertificateError -> no longer gets called
GetAuthCredentials -> gets called, but I do not want to pass login credentials this way - I already have a working solution for the http:// case when calling the site's normal login page.
providing a settings object to Cef.Initialize(...) that contains
var settings = new CefSettings { IgnoreCertificateErrors = true, ... };
settings.CefCommandLineArgs.Add ("auth-server-whitelist", "*host-url*");
settings.CefCommandLineArgs.Add ("auth-delegate-whitelist", "*host-url*");
on creation of the browser providing a RequestContext:
var browser = new CefSharp.OffScreen.ChromiumWebBrowser (
    "", requestContext: CreateNewRequestContext (webContext.Connection.Name));

CefSharp.RequestContext CreateNewRequestContext (string connName)
{
    var subDirName = Helper.Files.FileHelper.MakeValidFileSystemName (connName);
    var contextSettings = new RequestContextSettings
    {
        PersistSessionCookies = false,
        PersistUserPreferences = false,
        CachePath = Path.Combine (Cef.GetGlobalRequestContext ().CachePath, subDirName),
        IgnoreCertificateErrors = true,
    };
    // ...
    return new CefSharp.RequestContext (contextSettings);
}
I am aware that some of those changes are redundant (e.g. three ways to set whitelists, of which at least two should work for CefSharp; I'm not sure whether the registry one affects it) and, in the case of IgnoreCertificateErrors, dangerous and can't stay in. I just want it to work somehow, and then trim back to what is actually needed to make it work in production.
Research:
https://learn.microsoft.com/en-us/windows/desktop/SecAuthN/microsoft-ntlm
https://www.chromium.org/developers/design-documents/http-authentication
https://www.magpcss.org/ceforum/viewtopic.php?f=6&t=11085 leading to
https://bitbucket.org/chromiumembedded/cef/issues/1150/ntlm-authentication-issue (fixed 2y ago)
https://sysadminspot.com/windows/google-chrome-and-ntlm-auto-logon-using-windows-authentication/
https://productforums.google.com/forum/#!msg/chrome/1594XUaOVKY/8ChGCBrwYUYJ
and others .. still none the wiser.
Question: I am grasping at straws here. Has anybody got SSO with CefSharp working who can point me to what I am doing wrong?
TL;DR: I faced (at least) two problems: invalid SSL certificates and Kerberos token problems. My test setup has local computers set up with a web server I call into. These local computers are mostly Windows client OS VMs with self-signed certificates; some are Windows servers. The latter worked, the former did not. With IE, both worked.
Browsing to the site in question using https://... led to CefSharp encountering the self-signed certificate (which is not part of a trusted chain of certs); it will therefore call the browser's RequestHandler (if set) and invoke its
public override bool OnCertificateError (IWebBrowser browserControl, IBrowser browser,
                                         CefErrorCode errorCode, string requestUrl,
                                         ISslInfo sslInfo, IRequestCallback callback)
{
    Log.Logger.Warn (sslInfo.CertStatus.ToString ());
    Log.Logger.Warn (sslInfo.X509Certificate.Issuer);
    if (CertIsTrustedEvenIfInvalid (sslInfo.X509Certificate))
    {
        Log.Logger.Warn ("Trusting: " + sslInfo.X509Certificate.Issuer);
        if (!callback.IsDisposed)
            using (callback)
            {
                callback?.Continue (true);
            }
        return true;
    }
    else
    {
        return base.OnCertificateError (browserControl, browser, errorCode, requestUrl,
                                        sslInfo, callback);
    }
}
For testing purposes I hardcoded certain tests into CertIsTrustedEvenIfInvalid (sslInfo.X509Certificate) that return true for my test environment. This might be replaced by a simple return false, by a UI popup presenting the cert and asking the user whether she wants to proceed, or it might take certain user-provided cert files into account; I don't know yet:
bool CertIsTrustedEvenIfInvalid (X509Certificate certificate)
{
    var debug = new Dictionary<string, HashSet<string>> (StringComparer.OrdinalIgnoreCase)
    {
        ["cn"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "some", "data" },
        ["ou"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "other", "stuff" },
        ["o"]  = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "..." },
        ["l"]  = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "Atlantis" },
        ["s"]  = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "Outer Space" },
        ["c"]  = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "whatsnot" },
    };
    var x509issuer = certificate.Issuer
        .Split (",".ToCharArray ())
        .Select (part => part.Trim ().Split ("=".ToCharArray (), 2).Select (p => p.Trim ()))
        .ToDictionary (t => t.First (), t => t.Last ());
    return x509issuer.All (kvp => debug.ContainsKey (kvp.Key) &&
                                  debug[kvp.Key].Contains (kvp.Value));
}
Only if the SSL step works will SSO be tried.
After solving the SSL issue at hand, I ran into different behaviours of Chrome versus IE/Firefox etc., as described under "Choosing an authentication scheme" in the Chromium HTTP authentication docs. The gist of it is:
if multiple auth schemes are reported by the server, IE/Firefox use the first one they know, as delivered by the server (preference by order)
Chrome uses the one it deems of highest priority (in order: Negotiate -> NTLM -> Digest -> Basic), ignoring the server's ordering of alternate schemes.
My servers reported NTLM, Negotiate (in that order) - with IE it simply worked.
With Chrome this led to Kerberos tokens being exchanged, which only worked when the web server was hosted on a Windows Server OS, not on a Windows client OS. Probably some kind of misconfiguration for client OS computers in the AD used; not sure, but against a Server OS it works.
Additionally, I implemented the
public override bool GetAuthCredentials (IWebBrowser browserControl, IBrowser browser,
                                         IFrame frame, bool isProxy, string host,
                                         int port, string realm, string scheme,
                                         IAuthCallback callback)
{
    // pseudo code - asks for user & pw
    (string UserName, string Password) = UIHelper.UIOperation (() =>
    {
        // UI to ask for user && password:
        // return (user, pw) if input ok else return (null, null)
    });
    if (UserName.IsSet () && Password.IsSet ())
    {
        if (!callback.IsDisposed)
        {
            using (callback)
            {
                callback?.Continue (UserName, Password);
            }
            return true;
        }
    }
    return base.GetAuthCredentials (browserControl, browser, frame, isProxy,
                                    host, port, realm, scheme, callback);
}
to allow for a fallback if the SSO did not work out. After providing the AD credentials in this dialog, login is possible as well.
For good measure I also whitelisted the hosts in the CEF browser context on creation of a new browser, like so:
CefSharp.RequestContext CreateNewRequestContext (string subDirName, string host,
                                                 WebConnectionType conType)
{
    var contextSettings = new RequestContextSettings
    {
        PersistSessionCookies = false,
        PersistUserPreferences = false,
        CachePath = Path.Combine (Cef.GetGlobalRequestContext ().CachePath, subDirName),
    };
    var context = new CefSharp.RequestContext (contextSettings);
    if (conType == WebConnectionType.Negotiate) // just an enum for UserPW + Negotiate
        Cef.UIThreadTaskFactory.StartNew (() =>
        {
            // see https://cs.chromium.org/chromium/src/chrome/common/pref_names.cc for names
            var settings = new Dictionary<string, string>
            {
                ["auth.server_whitelist"] = $"*{host}*",
                ["auth.negotiate_delegate_whitelist"] = $"*{host}*",
                // only set-able via policies/registry :/
                // ["auth.schemes"] = "ntlm" // "basic", "digest", "ntlm", "negotiate"
            };
            // set the settings - we *trust* the host with this and allow negotiation
            foreach (var s in settings)
                if (!context.SetPreference (s.Key, s.Value, out var error))
                    Log.Logger.Debug?.Log ($"Error setting '{s.Key}': {error}");
        });
    return context;
}
Getting 401 (Unauthorized) while making a Web API controller call
public bool CheckCarrierSCAC(int carrierID)
{
    bool carrierScacSatus = false;
    carrierSCAC = new BackOfficeViewController().GetSCACCodeBYCarrierID(carrierID);
    logger.LogMessage(message: string.Format("Credentials {0}{1}", ConfigurationManager.AppSettings["HermesUserName"], ConfigurationManager.AppSettings["HermesPassword"]), logDate: true);
    Http.Preauthenticate = true;
    string serviceUrl = string.Format("{0}/CarrierSCAC?carrier={1}", ConfigurationManager.AppSettings["GatewayInterface"], carrierSCAC);
    logger.LogMessage(message: string.Format("Check Carrier SCAC Service URL {0} ", serviceUrl), logDate: true);
    try
    {
        carrierScacSatus = Http.Get<bool>(uri: serviceUrl, cookieContainer: null, contentType: "application/json");
    }
    catch (Exception exception)
    {
        logger.LogException(exception, message: "error while check Carrier Scac =" + exception.Message);
    }
    return carrierScacSatus;
}
I have already used preauthentication but am still getting the same error.
Setting Http.Preauthenticate = true just tells the web request to keep sending the Authorization header to that URI going forward, assuming it's a pass-through to .NET's HttpWebRequest.PreAuthenticate. In this case, you don't appear to actually be providing any credentials to the web request; the only place you reference the credentials is in the logger message.
The Http.Get<T> method should allow you to provide either raw header values (in which case you'll need to add your own Authorization header) or the credentials so that it creates the header for you. (This appears to be a library that wraps the C# WebRequest or some similar connection library, so you'll need to check its documentation for specific details.)
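Since the wrapper's exact signature isn't shown, here is a hedged sketch of both options against plain HttpWebRequest, which the wrapper presumably passes through to; the URL is a placeholder:

```csharp
using System;
using System.Net;
using System.Text;

// Sketch: actually attach credentials to the request instead of only logging them.
static HttpWebRequest BuildAuthenticatedRequest(string serviceUrl, string user, string password)
{
    var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
    request.ContentType = "application/json";

    // Option 1: provide credentials and let PreAuthenticate resend the header
    // on subsequent requests after the first 401 round-trip.
    request.Credentials = new NetworkCredential(user, password);
    request.PreAuthenticate = true;

    // Option 2: send a Basic Authorization header yourself on the very first request.
    var token = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{user}:{password}"));
    request.Headers["Authorization"] = $"Basic {token}";
    return request;
}
```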
How can I add a new document to Content Server 10.5 using the REST api?
I am following the Swagger docs for creating a node, but it is not clear how I attach the file to the request. Here is (roughly) the code I am using:
var folderId = 2000;
var docName = "test";
var uri = $"http://[serverName]/otcs/llisapi.dll/api/v1/nodes?type=144&parent_id={folderId}&name={docName}";
var request = new HttpRequestMessage();
request.Headers.Add("Connection", new[] { "Keep-Alive" });
request.Headers.Add("Cache-Control", "no-cache, no-store, must-revalidate");
request.Headers.Add("Pragma", "no-cache");
request.Headers.Add("OTCSTicket", /* ticket here */);
request.RequestUri = new Uri(uri);
request.Method = HttpMethod.Post;
request.Content = new ByteArrayContent(data);
request.Content.Headers.ContentType = new MediaTypeHeaderValue(MimeMapping.GetMimeMapping(filePath));
request.Headers.ExpectContinue = false;
var httpClientHandler = new HttpClientHandler
{
    Proxy = WebRequest.GetSystemWebProxy(),
    UseProxy = true,
    AllowAutoRedirect = true
};
using (var client = new HttpClient(httpClientHandler))
{
    var response = client.SendAsync(request).Result;
    IEnumerable<string> temp;
    var vals = response.Headers.TryGetValues("OTCSTicket", out temp) ? temp : new List<string>();
    if (vals.Any())
    {
        this.ticket = vals.First();
    }
    return response.Content.ReadAsStringAsync().Result;
}
I've been searching through the developer.opentext.com forums, but finding a complete example in C# is proving tough - there are a few examples in JavaScript, but attempting to replicate these in C# or via Chrome or Firefox extensions just gives the same results. Calling other CS REST methods has not been an issue so far; this is the first one that's giving me problems.
Edit: I pasted the wrong url into my question, which I've now fixed. It was var uri = $"http://[serverName]/otcs/llisapi.dll/api/v1/forms/nodes/create?type=0&parent_id={folderId}&name={docName}";.
Your URL doesn't look like the REST API; it's rather the traditional URL used for the UI.
This article should describe how to do what you want to do:
https://developer.opentext.com/webaccess/#url=%2Fawd%2Fresources%2Farticles%2F6102%2Fcontent%2Bserver%2Brest%2Bapi%2B%2Bquick%2Bstart%2Bguide&tab=501
EDITED:
Ok, so that's how it should work:
send a POST to http://www.your_content_server.com/cs[.exe]/api/v1/nodes
send this in your payload to create a document in your enterprise workspace
type=144
parent_id=2000
name=document_name.txt
<file>
An incomplete demo in Python would look like this. Make sure you get a valid ticket first.
import requests

files = {'file': open("file.txt", 'rb')}
data = {'type': 144, 'parent_id': 2000, 'name': 'document_name.txt'}
cs = requests.post(url, headers={'otcsticket': 'xxxxxxx'}, data=data, files=files)
if cs.status_code == 200:
    print("ok")
else:
    print(cs.text)
You will need a form input to get the file onto the page; then you can use file streams to redirect it. There is a great guide for that here.
Reading files in JavaScript using the File APIs
Here is a jQuery/Ajax example.
I find the best way to go about this is to use Postman (Chrome Plugin) to experiment until you get comfortable.
var form = new FormData();
form.append("file", "*filestream*");
form.append("parent_id", "100000");
form.append("name", "NameYourCreatedFile");
form.append("type", "144");
var settings = {
    "async": true,
    "url": "/cs.exe/api/v1/nodes", // You will need to amend this to match your environment
    "method": "POST",
    "headers": {
        "authorization": "Basic **use Postman to generate this**",
        "cache-control": "no-cache",
    },
    "processData": false,
    "contentType": false,
    "mimeType": "multipart/form-data",
    "data": form
};
$.ajax(settings).done(function (response) {
    console.log(response);
});
It appears that the OpenText API only supports file uploads through asynchronous JavaScript uploads, not through traditional file uploads using typical posted form requests that contain the file's contents (which is pretty crappy, to be honest, as this would be the easiest to handle on the server side).
I've contacted their support and they were absolutely no help - they said since it's working with JavaScript, they can't help me. Anyone utilizing any language besides JavaScript is SOL. I submitted my entire API package, but they didn't bother investigating and wanted to close my ticket ASAP.
The only way I've found to do this is to upload / send the file into the 'Upload' directory on your Content Server's web server (on ours it was set to D:\Upload). This directory location is configurable in the admin section.
Once you've sent the file to your web server, send a create-node request with the file param set to the full path of the file residing on your server, as the OpenText API will attempt to retrieve the file from this directory.
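In C#, assuming the file has already been copied into the server-side Upload directory, that create-node call could be sketched like this (host, ticket value, and IDs are placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Form fields for creating a document node (type 144 = document) from a file
// that already sits in the server-side Upload directory.
static Dictionary<string, string> BuildCreateNodeForm(long parentId, string name, string serverFilePath)
{
    return new Dictionary<string, string>
    {
        ["type"] = "144",
        ["parent_id"] = parentId.ToString(),
        ["name"] = name,
        ["file"] = serverFilePath // e.g. D:\Upload\My Document.txt, or just the file name
    };
}

static async Task<string> CreateNodeAsync(string baseUrl, string ticket, Dictionary<string, string> form)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("OTCSTicket", ticket);
        var response = await client.PostAsync(baseUrl + "/api/v1/nodes", new FormUrlEncodedContent(form));
        return await response.Content.ReadAsStringAsync();
    }
}
```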
I've created a PHP API for this, and you can browse its usage here:
https://github.com/FBCLIT/OpenTextApi
<?php
use Fbcl\OpenTextApi\Client;

$client = new Client('http://server.com/otcs/cs.exe', 'v1');
$api = $client->connect('username', 'secret');

try {
    // The folder node ID of where the file will be created under.
    $parentNodeId = '12356';
    // The file name to display in OpenText.
    $fileName = 'My Document.txt';
    // The actual file path of the file on the OpenText server.
    $serverFilePath = 'D:\Upload\My Document.txt';
    $response = $api->createNodeDocument($parentNodeId, $fileName, $serverFilePath);
    if (isset($response['id'])) {
        // The ID of the newly created document will be returned.
        echo $response['id'];
    }
} catch (\Exception $ex) {
    // File not found on server drive, or issue creating node from given parent.
}
MIME type detection appears to happen automatically, and you do not need to send anything for it to detect the file type. You can name the file whatever you like, without an extension.
I have also discovered that you cannot use an IP address or host name for uploading files in this manner. You must enter a path that is local to the server you are uploading to. You can, however, give just the name of a file that exists in the Upload directory, and the OpenText API seems to locate it fine.
For example, you can pass either D:\Uploads\Document.txt or Document.txt.
If you haven't done it correctly, you should get the error:
Client error: POST http://server.com/otcs/cs.exe/api/v1/nodes resulted in a 400 Bad Request response: {"error":"Error: File could not be found within the upload directory."}
I've been trying to get the count of unread emails in Gmail but I'm encountering some problems. I did a search and found the ImapX library that should help me achieve this, but the code I found here on StackOverflow in previous questions doesn't work. This is my code right now:
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string username = "my_email@gmail.com";
            string passwd = "my_pass";
            int unread = 0;
            ImapClient client = new ImapClient("imap.gmail.com", 993, true);
            bool result = client.IsConnected;
            if (result)
                Console.WriteLine("Connection Established");
            result = client.Login(username, passwd); // <-- Error here
            if (result)
            {
                Console.WriteLine("Logged in");
                FolderCollection folders = client.Folders;
                // Message messages = client.Folders["INBOX"].Messages;
                foreach (ImapX.Message m in client.Folders["INBOX"].Messages)
                {
                    if (m.Seen == false)
                        unread++;
                }
                Console.WriteLine(unread);
            }
        }
    }
}
The error is:
"The selected authentication mechanism is not supported" on line 26,
which is result = client.Login(username, passwd);
Sample from ImapX:
var client = new ImapX.ImapClient("imap.gmail.com", 993, true);
client.Connection();
client.LogIn(userName, userPassword);
var messages = client.Folders["INBOX"].Search("ALL", true);
Maybe you have enabled two-factor authentication and you need to generate an application password. Otherwise, you may receive an e-mail warning that something tried to access your mailbox, and you must add your application as an exception. As an alternative solution you can try https://github.com/jstedfast/MailKit; see the sample code at the bottom of the README.
Most likely, Gmail is looking for you to issue a STARTTLS command, which it appears ImapX does not support. If you look at the response to the IMAPX1 CAPABILITY request, you'll likely see a "LOGINDISABLED" element, which means the server won't accept the LOGIN command yet. So even if you use SSL, the server (Microsoft Exchange in my case) still expects the STARTTLS command before it will let me LOGIN.
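To illustrate the check this answer describes, a small sketch (the capability line below is hypothetical) that scans a CAPABILITY response for LOGINDISABLED:

```csharp
using System;
using System.Linq;

// True if the server's CAPABILITY response advertises LOGINDISABLED,
// i.e. LOGIN will be rejected until STARTTLS has been issued.
static bool LoginDisabled(string capabilityResponse) =>
    capabilityResponse
        .Split(' ')
        .Any(cap => cap.Equals("LOGINDISABLED", StringComparison.OrdinalIgnoreCase));
```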
You have to make the connection.
ImapClient client = new ImapClient("imap.gmail.com", 993, true);
client.Connect();
bool result = client.IsConnected;
So if you add the line client.Connect(), I think it solves your problem.
I'm trying to update a user's Twitter status from my C# application.
I searched the web and found several possibilities, but I'm a bit confused by the recent (?) change in Twitter's authentication process. I also found what seems to be a relevant StackOverflow post, but it simply does not answer my question because it's ultra-specific regarding a code snippet that does not work.
I'm attempting to reach the REST API and not the Search API, which means I should live up to the stricter OAuth authentication.
I looked at two solutions. The Twitterizer Framework worked fine, but it's an external DLL and I would rather use source code. Just as an example, the code using it is very clear and looks like so:
Twitter twitter = new Twitter("username", "password");
twitter.Status.Update("Hello World!");
I also examined Yedda's Twitter library, but this one failed on what I believe to be the authentication process, when trying basically the same code as above (Yedda expects the username and password in the status update itself but everything else is supposed to be the same).
Since I could not find a clear cut answer on the web, I'm bringing it to StackOverflow.
What's the simplest way to get a Twitter status update working in a C# application, without external DLL dependency?
Thanks
If you like the Twitterizer Framework but just don't like not having the source, why not download the source? (Or browse it if you just want to see what it's doing...)
I'm not a fan of re-inventing the wheel, especially when it comes to products that already provide 100% of the sought functionality. I actually have the source code for Twitterizer running side by side with my ASP.NET MVC application just so that I could make any necessary changes...
If you really don't want the DLL reference to exist, here is an example of how to code the updates in C#. Check this out from DreamInCode.
/*
 * A function to post an update to Twitter programmatically
 * Author: Danny Battison
 * Contact: gabehabe@hotmail.com
 */

/// <summary>
/// Post an update to a Twitter account
/// </summary>
/// <param name="username">The username of the account</param>
/// <param name="password">The password of the account</param>
/// <param name="tweet">The status to post</param>
public static void PostTweet(string username, string password, string tweet)
{
    try
    {
        // encode the username/password
        string user = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(username + ":" + password));
        // determine what we want to upload as a status
        byte[] bytes = System.Text.Encoding.ASCII.GetBytes("status=" + tweet);
        // connect with the update page
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://twitter.com/statuses/update.xml");
        // set the method to POST
        request.Method = "POST";
        request.ServicePoint.Expect100Continue = false; // thanks to argodev for this recent change!
        // set the authorisation levels
        request.Headers.Add("Authorization", "Basic " + user);
        request.ContentType = "application/x-www-form-urlencoded";
        // set the length of the content
        request.ContentLength = bytes.Length;
        // write the status to the request stream
        using (Stream reqStream = request.GetRequestStream())
            reqStream.Write(bytes, 0, bytes.Length);
        // the request is only actually sent once the response is requested
        using (WebResponse response = request.GetResponse()) { }
    }
    catch (Exception ex) { /* DO NOTHING */ }
}
Another Twitter library I have used successfully is TweetSharp, which provides a fluent API.
The source code is available at Google Code. Why don't you want to use a DLL? That is by far the easiest way to include a library in a project.
The simplest way to post stuff to Twitter is to use basic authentication, which isn't very strong.
static void PostTweet(string username, string password, string tweet)
{
    // Create a webclient with the twitter account credentials, which will be used to set the HTTP header for basic authentication
    WebClient client = new WebClient { Credentials = new NetworkCredential { UserName = username, Password = password } };
    // Don't wait to receive a 100 Continue HTTP response from the server before sending out the message body
    ServicePointManager.Expect100Continue = false;
    // Construct the message body
    byte[] messageBody = Encoding.ASCII.GetBytes("status=" + tweet);
    // Send the HTTP headers and message body (a.k.a. Post the data)
    client.UploadData("http://twitter.com/statuses/update.xml", messageBody);
}
Try LINQ To Twitter. Below is a complete LINQ To Twitter update-status-with-media code example that works with Twitter REST API v1.1. The solution is also available for download.
LINQ To Twitter Code Sample
var twitterCtx = new TwitterContext(auth);
string status = "Testing TweetWithMedia #Linq2Twitter " +
                DateTime.Now.ToString(CultureInfo.InvariantCulture);
const bool PossiblySensitive = false;
const decimal Latitude = StatusExtensions.NoCoordinate;
const decimal Longitude = StatusExtensions.NoCoordinate;
const bool DisplayCoordinates = false;
string ReplaceThisWithYourImageLocation = Server.MapPath("~/test.jpg");

var mediaItems =
    new List<Media>
    {
        new Media
        {
            Data = Utilities.GetFileBytes(ReplaceThisWithYourImageLocation),
            FileName = "test.jpg",
            ContentType = MediaContentType.Jpeg
        }
    };

Status tweet = twitterCtx.TweetWithMedia(
    status, PossiblySensitive, Latitude, Longitude,
    null, DisplayCoordinates, mediaItems, null);
Try TweetSharp. Below is a complete TweetSharp update-status-with-media code example that works with Twitter REST API v1.1. The solution is also available for download.
TweetSharp Code Sample
//if you want a status update only, uncomment the line below instead
//var result = tService.SendTweet(new SendTweetOptions { Status = Guid.NewGuid().ToString() });
Bitmap img = new Bitmap(Server.MapPath("~/test.jpg"));
if (img != null)
{
    MemoryStream ms = new MemoryStream();
    img.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
    ms.Seek(0, SeekOrigin.Begin);
    Dictionary<string, Stream> images = new Dictionary<string, Stream> { { "mypicture", ms } };
    // Twitter compares status contents and rejects duplicated status messages,
    // so a generic GUID is used here to create a unique message dynamically.
    var result = tService.SendTweetWithMedia(new SendTweetWithMediaOptions { Status = Guid.NewGuid().ToString(), Images = images });
    if (result != null && result.Id > 0)
    {
        Response.Redirect("https://twitter.com");
    }
    else
    {
        Response.Write("fails to update status");
    }
}
Here's another solution with minimal code, using the excellent AsyncOAuth NuGet package and Microsoft's HttpClient. This solution also assumes you're posting on your own behalf, so you already have your access token key/secret; even if you don't, the flow is pretty easy (see the AsyncOAuth docs).
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Security.Cryptography;
using System.Threading.Tasks;
using AsyncOAuth;

public class TwitterClient
{
    private readonly HttpClient _httpClient;

    public TwitterClient()
    {
        // See AsyncOAuth docs (differs for WinRT)
        OAuthUtility.ComputeHash = (key, buffer) =>
        {
            using (var hmac = new HMACSHA1(key))
            {
                return hmac.ComputeHash(buffer);
            }
        };
        // Best to store secrets outside the app (Azure Portal/etc.)
        _httpClient = OAuthUtility.CreateOAuthClient(
            AppSettings.TwitterAppId, AppSettings.TwitterAppSecret,
            new AccessToken(AppSettings.TwitterAccessTokenKey, AppSettings.TwitterAccessTokenSecret));
    }

    public async Task UpdateStatus(string status)
    {
        try
        {
            var content = new FormUrlEncodedContent(new Dictionary<string, string>()
            {
                { "status", status }
            });
            var response = await _httpClient.PostAsync("https://api.twitter.com/1.1/statuses/update.json", content);
            if (response.IsSuccessStatusCode)
            {
                // OK
            }
            else
            {
                // Not OK
            }
        }
        catch (Exception ex)
        {
            // Log ex
        }
    }
}
This works on all platforms thanks to HttpClient's nature. I use this method myself on Windows Phone 7/8 for a completely different service.