I have read a few related questions and tried their examples, and they worked for me; here are the links:
- Link 1
- Link 2
- Link 3
However, what I'm trying to accomplish is a little different: I'm trying to modify/upload a .DAT file that contains plain text. I'm able to read the file contents with no problems, but I always get 400 Bad Request (Protocol Error). Here's my code:
DocumentsService service = new DocumentsService("Man");
service.setUserCredentials(UserName, Password);
DocumentsListQuery fileQuery = new DocumentsListQuery();
fileQuery.Query = User.FileName;
fileQuery.TitleExact = true;
DocumentsFeed feed = service.Query(fileQuery);
//Here I get my file to update
DocumentEntry entry = (DocumentEntry)feed.Entries[0];
// Set the media source
//Here I have tried application/octet-stream also
entry.MediaSource = new MediaFileSource(DataStream, User.FileName, "text/plain");
// Instantiate the ResumableUploader component.
ResumableUploader uploader = new ResumableUploader();
// Set the handlers for the completion and progress events
uploader.AsyncOperationCompleted += new AsyncOperationCompletedEventHandler(OnDone);
uploader.AsyncOperationProgress += new AsyncOperationProgressEventHandler(OnProgress);
ClientLoginAuthenticator authenticator = new ClientLoginAuthenticator("Man", ServiceNames.Documents,Credentials);
Uri updateUploadUrl = new Uri(UploadUrl);
AtomLink aLink = new AtomLink(updateUploadUrl.AbsoluteUri);
aLink.Rel = ResumableUploader.CreateMediaRelation;
entry.Links.Add(aLink);
// Start the update process.
uploader.UpdateAsync(authenticator,entry,new object());
Thanks.
EDIT:
This is how I solved it. Thanks to Claudio for guiding me in the proper direction.
Download the example application from: Download Sample (.zip)
Add these classes from the SampleHelper project to your own project: AuthorizationMgr, INativeAuthorizationFlow, LoopbackServerAuthorizationFlow, WindowTitleNativeAuthorizationFlow
Use them with this code:
//Call Authorization method... (omitted)
File body = new File();
body.Title = title;
body.Description = description;
body.MimeType = mimeType;
byte[] byteArray = System.IO.File.ReadAllBytes(filename);
MemoryStream stream = new MemoryStream(byteArray);
try {
FilesResource.InsertMediaUpload request = service.Files.Insert(body, stream, mimeType);
request.Upload();
File file = request.ResponseBody;
// Uncomment the following line to print the File ID.
// Console.WriteLine("File ID: " + file.Id);
return file;
} catch (Exception e) {
Console.WriteLine("An error occurred: " + e.Message);
return null;
}
Whenever the server returns a 400 Bad Request, it also includes a descriptive error message telling you what is invalid in your request. If you can't get to it from the debugger, you should install Fiddler and capture the HTTP request and response.
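If you can't run Fiddler, a minimal sketch of pulling that error body out of the WebException in code (the ReadErrorBody helper name is just for illustration; it is not part of the GData library):
// Requires: using System.IO; using System.Net;
// Illustrative helper only: reads whatever error body the server sent back
// with the 400 Bad Request so you can see which part of the request it rejected.
static string ReadErrorBody(WebException ex)
{
    if (ex.Response == null)
        return null;
    using (var reader = new StreamReader(ex.Response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}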
Another piece of advice I have for you is to start using the newer Drive API instead of the Documents List API. Here is a 5-minute C# quickstart sample showing how to upload a file to Google Drive:
https://developers.google.com/drive/quickstart
I am facing this exception
Error getting value from 'Position' on 'Amazon.Runtime.Internal.Util.MD5Stream'.
when trying to read a file from an AWS S3 bucket.
Here is my C# code:
try
{
var s3ObjectPath = $"users/{email.Id}/emailattachments/{item.AttachedFileName}";
var ifExists = await this.Exists(s3ObjectPath);
if (ifExists)
{
Stream attachment = await s3Client.GetObjectStreamAsync(attachmentS3BucketName, s3ObjectPath, dicData);
Attachment att = new Attachment(attachment, item.AttachedFileName);
attachments.Add(att);
}
}
catch (AmazonS3Exception ex)
{
}
However, this works only sometimes. I searched everywhere but didn't find a solution.
Thanks in advance!!
This is due to how Streams work. When you open a Stream you are not actually getting the content of the file; rather, you are opening a connection that enables you to access the data.
The error you got is returned when your code tries to access the stream but no longer has the ability to do so (a lost connection, missing credentials in the code accessing the stream, or some other reason).
The solution is either to resolve the underlying issue and make sure your code still has access to the Stream, or to simply read the stream up front and return a string rather than a stream.
The code addition would look something like this:
...
Stream attachment = await s3Client.GetObjectStreamAsync(attachmentS3BucketName, s3ObjectPath, dicData);
// Read the whole object into memory while the S3 connection is still open.
StreamReader reader = new StreamReader(attachment);
string attachmentText = reader.ReadToEnd();
// Build the attachment from the buffered string so it no longer depends on the live S3 stream.
Attachment att = Attachment.CreateAttachmentFromString(attachmentText, item.AttachedFileName);
attachments.Add(att);
I've read through multiple articles regarding this issue; however, they have all ended up not working, or are in VB.NET.
What I currently have:
The reports are accessed via a URL which renders them as a PDF and saves them in the Downloads folder when the user clicks a button; these are given generic names such as OrderReport, OrderReport(1), and so on.
var orderNum = 1;
"http://Server/ReportServer_Name/Pages/ReportViewer.aspx?%2fOrderReport&rs:Command=Render&OrderID=" + orderNum + "&rs:ClearSession=true&rs:Format=PDF"
What I am trying to achieve:
I would like to use C# to fetch this report if possible, and then specify a name for the PDF file and save it in the correct location.
So, for example, I would like to save this report in a temporary folder ("C:\temp" for now) with the name OrderID-1. I am using C#.
I have added a Service Reference into the project I am using, called ReportTestings, so the reference is
using ReportTestings;
and the Web Reference URL:
http://Server/ReportServer_Name/ReportExecution2005.asmx
(removed the actual names for security reasons)
So, based on all of this information above, could someone point me in the right direction or give an example piece of code? Thank you to everyone who reads this post or helps.
Using this code I get this error:
{"Access to the path 'C:\\Program Files (x86)\\IIS Express\\report1one.pdf' is denied."} (System.UnauthorizedAccessException)
code:
ReportExecutionService rs = new ReportExecutionService();
rs.Credentials = new NetworkCredential("username", "password", "domain");
rs.Url = "http://Server/ReportServer_Name/reportexecution2005.asmx";
// Render arguments
byte[] result = null;
string reportPath = "/Invoice";
string format = "PDF";
string historyID = null;
string devInfo = @"<DeviceInfo><Toolbar>False</Toolbar></DeviceInfo>";
// Prepare report parameter.
ParameterValue[] parameters = new ParameterValue[1]; // only one parameter is populated below
parameters[0] = new ParameterValue();
parameters[0].Name = "InvoiceID";
parameters[0].Value = "2";
DataSourceCredentials[] credentials = null;
string showHideToggle = null;
string encoding;
string mimeType;
string extension;
Warning[] warnings = null;
ParameterValue[] reportHistoryParameters = null;
string[] streamIDs = null;
ExecutionInfo execInfo = new ExecutionInfo();
ExecutionHeader execHeader = new ExecutionHeader();
rs.ExecutionHeaderValue = execHeader;
execInfo = rs.LoadReport(reportPath, historyID);
rs.SetExecutionParameters(parameters, "en-us");
String SessionId = rs.ExecutionHeaderValue.ExecutionID;
Console.WriteLine("SessionID: {0}", rs.ExecutionHeaderValue.ExecutionID);
try
{
result = rs.Render(format, devInfo, out extension, out encoding, out mimeType, out warnings, out streamIDs);
execInfo = rs.GetExecutionInfo();
Console.WriteLine("Execution date and time: {0}", execInfo.ExecutionDateTime);
}
catch (SoapException e)
{
Console.WriteLine(e.Detail.OuterXml);
}
// Write the contents of the report to a PDF file.
try
{
FileStream stream = File.Create("report1one.pdf", result.Length);
Console.WriteLine("File created.");
stream.Write(result, 0, result.Length);
Console.WriteLine("Result written to the file.");
stream.Close();
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
The webservice URL you are using (ReportService2012) is for managing the report server objects.
If you need to render reports, you should be using the ReportExecution2005 webservice.
To get started, you should take a look at the Render method.
To specify credentials you can add the following line (I'm using the same variable name as in your link: RS2005):
RS2005.Credentials = new System.Net.NetworkCredential("username", "password", "domain");
EDIT:
Your access-denied error occurs because your application tries to save the file into the web application's working directory (IIS Express in this case), so you should use an absolute path, or resolve one using Server.MapPath, as in the sketch below.
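For example, a minimal sketch, assuming this runs inside the web application (so Server.MapPath is available) and that result holds the rendered PDF bytes from the code above; the ~/Reports folder and the C:\temp path are placeholders:
// Resolve a path inside the web application instead of relying on the process
// working directory (which is C:\Program Files (x86)\IIS Express here).
string targetPath = Server.MapPath("~/Reports/report1one.pdf");
// Or use an explicit absolute path that the application pool account can write to:
// string targetPath = @"C:\temp\OrderID-1.pdf";
File.WriteAllBytes(targetPath, result);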
I recommend an amazingly simple implementation that I found at Crafted For Everyone. The author shows how to add a Service Reference to the SSRS server, and then provides sample code that worked for me on the first try.
It saves the report to a local file, but it is easy to send the report in an email (not included in the sample).
I am getting the above exception while trying to upload a document into my document library. Here is my code:
spFile = spWeb.Files.Add(
docUrl,
uploadStream,
true, // overwrite or add a new version
uploadMessage.Metadata.CheckInComment ?? "", //check-in comment to use when creating the file in the collection. (can't be NULL)
false);
The issue does not occur while manually uploading the file from the browser, but the above-mentioned exception occurs while adding the file to a Document Library from code using SPWeb.Files.Add. Please note that the Content DB of the web application is not full and is not set to Read Only.
I would try doing what you are doing with PowerShell, as outputting the objects there will give you more insight into your problem.
I would do this slightly differently, so I could be sure that the file was going exactly where I wanted it to go:
var sp = new SPSite("http://localhost");
var site = sp.OpenWeb();
var folder = site.GetFolder("Documents");
var files = folder.Files;
// Opening a filestream
var fStream = File.OpenRead(@"C:\MyDocument.docx");
var contents = new byte[fStream.Length];
fStream.Read(contents, 0, (int)fStream.Length);
fStream.Close();
// Adding any metadata needed
var documentMetadata = new Hashtable {{"Comments", "Hello World"}};
// Adding the file to the SPFileCollection
var currentFile =
files.Add("Documents/MyDocument.docx", contents, documentMetadata, true);
site.Dispose();
sp.Dispose();
The code isn't written by me, but you can find it at the link.
Please check out
http://zimmergren.net/technical/how-to-upload-a-filedocument-using-the-sharepoint-object-model
Cheers
Truez
I have been pretty much stuck on a problem for the last few days. I have files located on a remote server that can be accessed using a userId and password. No problem with access.
The problem is that I have around 150 of them, and each varies in size between roughly 2 MB and 3 MB.
I have to read them one by one and get the last row/line of data from each, which is what my current code does.
The main problem is that it takes too much time, since it reads each file from top to bottom.
public bool TEst(string ControlId, string FileName, long offset)
{
// The serverUri parameter should use the ftp:// scheme.
// It identifies the server file that is to be downloaded
// Example: ftp://contoso.com/someFile.txt.
// The fileName parameter identifies the local file.
//The serverUri parameter identifies the remote file.
// The offset parameter specifies where in the server file to start reading data.
Uri serverUri;
String ftpserver = "ftp://xxx.xxx.xx.xxx/"+FileName;
serverUri = new Uri(ftpserver);
if (serverUri.Scheme != Uri.UriSchemeFtp)
{
return false;
}
// Get the object used to communicate with the server.
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverUri);
request.Credentials = new NetworkCredential("test", "test");
request.Method = WebRequestMethods.Ftp.DownloadFile;
//request.Method = WebRequestMethods.Ftp.DownloadFile;
request.ContentOffset = offset;
FtpWebResponse response = null;
try
{
response = (FtpWebResponse)request.GetResponse();
// long Size = response.ContentLength;
}
catch (WebException e)
{
Console.WriteLine(e.Status);
Console.WriteLine(e.Message);
return false;
}
// Get the data stream from the response.
Stream newFile = response.GetResponseStream();
// Use a StreamReader to simplify reading the response data.
StreamReader reader = new StreamReader(newFile);
string newFileData = reader.ReadToEnd();
// Append the response data to the local file
// using a StreamWriter.
string[] parser = newFileData.Split('\t');
string strID = parser[parser.Length - 5];
string strName = parser[parser.Length - 3];
string strStatus = parser[parser.Length-1];
if (strStatus.Trim().ToLower() != "suspect")
{
HtmlTableCell control = (HtmlTableCell)this.FindControl(ControlId);
control.InnerHtml = strName.Split('.')[0];
}
else
{
HtmlTableCell control = (HtmlTableCell)this.FindControl(ControlId);
control.InnerHtml = "S";
}
// Display the status description.
// Cleanup.
reader.Close();
response.Close();
//Console.WriteLine("Download restart - status: {0}", response.StatusDescription);
return true;
}
Threading:
protected void Page_Load(object sender, EventArgs e)
{
new Task(()=>this.TEst("controlid1", "file1.tsv", 261454)).Start();
new Task(()=>this.TEst1("controlid2", "file2.tsv", 261454)).Start();
}
FTP is not capable of seeking within a file to read only the last few lines (reference: FTP Commands). You'll have to coordinate with the developers and owners of the remote FTP server and ask them to make an additional file containing the data you need.
For example, ask the owners of the remote FTP server to create, for each of the files, a [filename]_lastrow file that contains the last row of that file. Your program would then operate on the [filename]_lastrow files. You'll probably be pleasantly surprised with an accommodating answer of "Ok, we can do that for you".
If the FTP server can't be changed, ask for a database connection.
You can also download all your files in parallel and start popping them into a queue for parsing as they complete, rather than doing this process synchronously. If the FTP server can handle more connections, use as many as are reasonable for the scenario. Parsing can be done in parallel too; see the sketch after the link below.
More reading: System.Threading.Tasks
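A rough sketch of that idea with the Task Parallel Library, assuming a DownloadAndParse helper that wraps the FTP read shown in the question; the helper name, the file list, and the degree of parallelism are illustrative only:
// Requires: using System.Linq; using System.Threading.Tasks;
// DownloadAndParse is an assumed wrapper around the FTP read from the question.
string[] fileNames = { "file1.tsv", "file2.tsv" /* ...and the remaining files */ };
var tasks = fileNames
    .Select(name => Task.Run(() => DownloadAndParse(name)))
    .ToArray();
// Block until every file has been fetched and parsed before updating the page controls.
Task.WaitAll(tasks);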
It's kinda buried, but I placed a comment in your original answer. This SO question leads to this blog post which has some awesome code you can draw from.
Rather than your while loop, you can skip directly to the end of the Stream by using Seek. You then want to work your way backwards through the stream until you find the first newline character. This post should give you everything you need to know:
Get last 10 lines of very large text file > 10GB
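As a rough illustration of that Seek idea (it only applies to a seekable stream, such as a local file or a MemoryStream; an FTP response stream is not seekable, so the bytes have to be local first):
// Requires: using System.Collections.Generic; using System.IO; using System.Text;
// Sketch: read the last line of a seekable stream by walking backwards from the end.
static string ReadLastLine(Stream stream)
{
    var bytes = new List<byte>();
    for (long pos = stream.Length - 1; pos >= 0; pos--)
    {
        stream.Seek(pos, SeekOrigin.Begin);
        int b = stream.ReadByte();
        if (b == '\n' && bytes.Count > 0)
            break;                                   // reached the previous line break
        if (b != '\n' && b != '\r')
            bytes.Add((byte)b);                      // collect the last line, reversed
    }
    bytes.Reverse();
    return Encoding.UTF8.GetString(bytes.ToArray());
}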
FtpWebRequest includes the ContentOffset property. Find/choose a way to keep the offset of the last line (locally or remotely, e.g. by uploading a 4-byte file to the FTP server). This is the fastest way to do it and the most efficient in terms of network traffic.
More information about FtpWebRequest can be found at MSDN
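A hedged sketch of that approach, first asking the server for the file size and then downloading only the tail; the TailBytes value is an assumed tuning constant, and the server address and credentials simply echo the question:
// Placeholder values: adjust TailBytes and the server/credentials to your setup.
const int TailBytes = 4096;
string uri = "ftp://xxx.xxx.xx.xxx/file1.tsv";
// 1. Ask the server how big the file is (SIZE command).
var sizeRequest = (FtpWebRequest)WebRequest.Create(uri);
sizeRequest.Credentials = new NetworkCredential("test", "test");
sizeRequest.Method = WebRequestMethods.Ftp.GetFileSize;
long fileSize;
using (var sizeResponse = (FtpWebResponse)sizeRequest.GetResponse())
{
    fileSize = sizeResponse.ContentLength;
}
// 2. Download only the last few kilobytes by starting at an offset near the end.
var downloadRequest = (FtpWebRequest)WebRequest.Create(uri);
downloadRequest.Credentials = new NetworkCredential("test", "test");
downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
downloadRequest.ContentOffset = Math.Max(0, fileSize - TailBytes);
using (var response = (FtpWebResponse)downloadRequest.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string tail = reader.ReadToEnd();
    // The last line is the final non-empty entry of tail.Split('\n').
}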
I have written a procedure that will open an .xls file from a local disk, refresh the data in it, and then save it again. This works fine.
The problem occurs when I change the filename to point to a SharePoint site. It opens the file fine and refreshes the data, but when it tries to save the file it throws an exception with the message "Cannot save as that name. Document was opened as read-only.".
If I try and save the file with a different filename then it works fine.
Does anybody know what I am missing? I think it must have something to do with how I am opening the file. Is there another way that I can force the file to be opened in a read/write manner?
private static void RefreshExcelDocument(string filename)
{
var xls = new Microsoft.Office.Interop.Excel.Application();
xls.Visible = true;
xls.DisplayAlerts = false;
var workbook = xls.Workbooks.Open(Filename: filename, IgnoreReadOnlyRecommended: true, ReadOnly: false);
try
{
// Refresh the data from data connections
workbook.RefreshAll();
// Wait for the refresh to occur - wish there was a better way than this.
System.Threading.Thread.Sleep(5000);
// Save the workbook back again
workbook.SaveAs(Filename: filename); // This is when the Exception is thrown
// Close the workbook
workbook.Close(SaveChanges: false);
}
catch (Exception ex)
{
//Exception message is "Cannot save as that name. Document was opened as read-only."
}
finally
{
xls.Application.Quit();
xls = null;
}
}
Many thanks in advance for suggestions.
Jonathan
Unfortunately you can't save directly to SharePoint using the Excel API. That's why the file is being opened as read only - it's not allowed.
The good news is that it is possible, but you have to submit the file via a web request. Even better news is that there is sample code on MSDN! In particular, notice the PublishWorkbook method that sends a local copy of the Excel file to the server via a web request:
static void PublishWorkbook(string LocalPath, string SharePointPath)
{
WebResponse response = null;
try
{
// Create a PUT Web request to upload the file.
WebRequest request = WebRequest.Create(SharePointPath);
request.Credentials = CredentialCache.DefaultCredentials;
request.Method = "PUT";
// Allocate a 1K buffer to transfer the file contents.
// The buffer size can be adjusted as needed depending on
// the number and size of files being uploaded.
byte[] buffer = new byte[1024];
// Write the contents of the local file to the
// request stream.
using (Stream stream = request.GetRequestStream())
using (FileStream fsWorkbook = File.Open(LocalPath,
FileMode.Open, FileAccess.Read))
{
int i = fsWorkbook.Read(buffer, 0, buffer.Length);
while (i > 0)
{
stream.Write(buffer, 0, i);
i = fsWorkbook.Read(buffer, 0, buffer.Length);
}
}
// Make the PUT request.
response = request.GetResponse();
}
finally
{
if (response != null) response.Close();
}
}
The sample code describes a scenario for the 2007 versions of these products but other versions should behave in the same way.
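Applied to the refresh routine above, a hedged usage sketch would be to save the refreshed workbook to a local path first and then push that copy to SharePoint with PublishWorkbook; both paths below are placeholders, and workbook refers to the variable from the question's code:
// Sketch: inside RefreshExcelDocument, save locally, then PUT the file to SharePoint.
string localCopy = @"C:\temp\RefreshedWorkbook.xlsx";
string sharePointUrl = "http://sharepointserver/Shared Documents/RefreshedWorkbook.xlsx";
workbook.SaveAs(Filename: localCopy);      // save to a writable local path instead
workbook.Close(SaveChanges: false);
PublishWorkbook(localCopy, sharePointUrl);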
What does the filename of a failed example look like? Aren't documents used in SharePoint stored in the database? Or am I getting your problem wrong? Otherwise, I could imagine that the file you are trying to store is write-protected by the operating system and cannot be modified.