I'm writing a small client-side Blazor app. To avoid using any DB (and creating a whole API for it), as I have basically just a list of single-class objects, I wanted to use a JSON file.
I know I can read from a file:
forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>("sample-data/weather.json");
But when I modify this "forecasts" list and try to do something like this:
await Http.PostAsJsonAsync("sample-data/weather.json", forecasts);
or
await Http.PostAsJsonAsync<WeatherForecast[]>("sample-data/weather.json", forecasts);
It is not saving the changes. I checked in the browser and it is doing a POST with the correct data, but it receives a 404 in return. Is it possible to save such data after modification?
What you are trying to do is change a file on the server from an application that lives in the user's browser. When the user visits your Blazor site, all of the files (except the ones that are set for lazy loading) are downloaded into the user's browser. After that, the application lives and runs in the user's browser.
If you don't need concurrency checks or real database capabilities, you could build a simple server that allows static file access; another option is simple blob access from AWS S3 or Azure Blob Storage.
You can look up the difference between client-side and server-side Blazor here.
If you use client-side Blazor and need to share persistent data between multiple clients, you will need to implement a server or service that handles this for you.
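If you move to the hosted (ASP.NET Core) Blazor model, that server piece can be as small as the following sketch, which accepts the POST your client is already sending. The controller name, route, and file path are assumptions, and there is no locking or validation; it reuses the WeatherForecast type from your question:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class WeatherController : ControllerBase
{
    // Hypothetical endpoint matching the client's PostAsJsonAsync call.
    // No concurrency handling or validation - illustration only.
    [HttpPost("sample-data/weather.json")]
    public async Task<IActionResult> Save([FromBody] WeatherForecast[] forecasts)
    {
        var json = System.Text.Json.JsonSerializer.Serialize(forecasts);
        await System.IO.File.WriteAllTextAsync("wwwroot/sample-data/weather.json", json);
        return Ok();
    }
}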
Is there an alternative to the Microsoft Azure C# API? I want to download files from blob URLs, but a Microsoft Azure storage account is not free, so I cannot use it. Is there any other API or way to download blobs?
Example of a blob URL: blob:https://flex.aniflex.org/afbf9776-76ea-47dc-9951-2fadafc3adff
Caution: I'm not the host of the file, so I don't want to download the file from my own storage account.
@kaskorian I guess you're referring to browser file blobs... A blob URL/object URL is a pseudo-protocol that allows Blob and File objects to be used as a URL source for things like images, download links for binary data, and so forth.
Blob URLs can only be generated internally by the browser. URL.createObjectURL() creates a special reference to the Blob or File object which can later be released using URL.revokeObjectURL(). These URLs can only be used locally, in the single instance of the browser and in the same session (i.e. the life of the page/document).
For example, you cannot hand an Image object raw byte data, as it would not know what to do with it. Images (which are binary data) have to be loaded via URLs, and this applies to anything that requires a URL as its source. Instead of uploading the binary data and then serving it back via a URL, it is better to use this extra local step to access the data directly without going via a server.
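Since blob: URLs can only be minted by the browser itself, the closest thing you can construct yourself from raw bytes on the .NET side is a data: URI. A tiny hypothetical helper, only sensible for small payloads:

using System;

// Wrap raw image bytes in a data: URI usable wherever a URL source is expected.
// Unlike a blob: URL this inlines the entire payload, so keep it to small files.
static string ToDataUrl(byte[] bytes, string mimeType) =>
    $"data:{mimeType};base64,{Convert.ToBase64String(bytes)}";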
I have been looking at exporting data from a company SharePoint site using C#, written in VS Express 2013. First, a caveat - I'm new to web-based APIs (SOAP or REST) and SharePoint, so apologies if my question is prosaically easy to answer or badly worded. Pretty much all of my previous work has been with files local to the machine, or on a similarly local company network that can be accessed in the same way.
My aim: download a list from one site, do stuff to it on the client machine, and then re-upload it to a different SharePoint site.
I have tried using the SOAP API (client object model), but I encountered a variety of access and permissions issues. So I switched to the REST API, and have now managed to get the list data into XML within my browser. But I don't really want it in my browser - I want to access it programmatically and write selected data from the list into a local file (using System.IO.Path.GetTempPath() to find the temporary folder), without browser windows popping up (except to allow the user to log in to establish an authorized context with the server). There has to be some trivially easy way of saving the XML data to my temporary file without needing a browser open, but I haven't been able to find it.
My REST query looks like the following:
https://sk.someSharePointSite/sites/subsection/_vti_bin/ListData.svc/AList
I would suggest using the SharePoint client object model (CSOM) for downloading/uploading files to SharePoint.
If there is anything specific you can't do with CSOM, then use the REST API.
You can check the code details here:
http://msdn.microsoft.com/en-us/library/ee956524%28office.14%29.aspx
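A minimal CSOM sketch for reading the list from your REST URL; the list title ("AList") and the use of your default Windows credentials are assumptions:

using System;
using Microsoft.SharePoint.Client;   // CSOM, from the SharePoint client SDK

using (var ctx = new ClientContext("https://sk.someSharePointSite/sites/subsection"))
{
    // Assumes your current credentials are accepted; otherwise set ctx.Credentials.
    var list = ctx.Web.Lists.GetByTitle("AList");   // display title assumed
    var items = list.GetItems(CamlQuery.CreateAllItemsQuery());
    ctx.Load(items);
    ctx.ExecuteQuery();                             // single round trip to the server

    foreach (var item in items)
        Console.WriteLine(item["Title"]);
}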
I've stored some strings in web storage (session and/or local), and am wondering if it is possible to check for such stored strings on page load or init on the server side (ASP.NET C# in my case)... So, for example, I would know not to re-fetch data from the DB and could use what is already resident in the browser from the last page load.
No, that's not possible. sessionStorage lives on the client. If you want to access it on the server, you will have to write JavaScript that reads the value and puts it in a hidden field so that it is sent to the server, or JavaScript that reads the value from storage and redirects to the server, passing it as a query string parameter. There's absolutely no way for the server to access this storage directly. That's one of the drawbacks of sessionStorage vs. cookies.
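A rough Web Forms sketch of the hidden-field approach (the control ID, storage key, and helper methods are all made up):

// Markup: <asp:HiddenField ID="CachedData" runat="server" />
// Client script that runs before the form posts back:
//   document.getElementById('<%= CachedData.ClientID %>').value =
//       sessionStorage.getItem('myKey') || '';

protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack && !string.IsNullOrEmpty(CachedData.Value))
        UseBrowserCachedData(CachedData.Value);   // hypothetical: skip the DB fetch
    else
        LoadFromDatabase();                       // hypothetical: normal fetch
}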
I am trying to build a system where, when the user logs in to an online ASP.NET site, the User ID (which is placed in a session variable) is then stored by a local application (a C# application) for later use.
So far, the only way I can think of doing this is, when the user logs in, to store the User ID in a text file on the client's machine. But this would not be an ideal solution, as it means the local client application must then check the file to make sure the contents have not changed (almost every second, as it is important that the client application always has the correct User ID).
Any suggestions?
When you say "the User ID is stored in a text file on the client's machine", I deduce you mean cookies, because you simply can't store files on client machines via web applications, unless there is some sort of ActiveX control involved.
You can certainly store a cookie with the User ID on the client and access it with your console app, but this is not very reliable: the user can have cookies disabled or clear the cookies folder, and different browsers use different folders for storing cookies.
So my choice would rather be storing the currently logged-in users in a database and making the console app poll that info through a WCF service.
If you don't want to use a database, store an XML file on the server that acts as your database, and use, for example, LINQ to XML to retrieve the data via that WCF service, as in the sketch below.
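For the XML variant, a hedged LINQ to XML sketch (the file path, element, and attribute names are invented):

using System;
using System.Linq;
using System.Xml.Linq;

// Assumed structure: <Users><User machine="PC01" id="42" /></Users>
var doc = XDocument.Load(@"C:\Data\LoggedInUsers.xml");
string machineName = Environment.MachineName;
var userId = doc.Descendants("User")
                .Where(u => (string)u.Attribute("machine") == machineName)
                .Select(u => (string)u.Attribute("id"))
                .FirstOrDefault();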
Another option, instead of polling the info: you could use WCF duplex services and make the web service push that info to the client apps once a user logs in.
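Either way, the boundary the local app talks to could look something like this hypothetical contract:

using System.ServiceModel;

// Hypothetical WCF contract the local C# application polls (or, with a duplex
// binding, receives callbacks from) to learn the currently logged-in User ID.
[ServiceContract]
public interface ILoggedInUsersService
{
    [OperationContract]
    string GetCurrentUserId(string machineName);
}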
I need to let a company push information up to my site.
The best way to explain what I am talking about is to explain how it is currently done with their previous website:
This company uploads a CSV file to an FTP set up by the website. The website then processes the CSV file and puts it into an SQL database so that it can be used by the website.
In this case, I am the website and I am working with the company. Both sides are willing to change what they do. So my question is...
What is the best way to accept batch information like this? Is there a more automated way that doesn't involve FTP? In the future I may have a lot of companies wanting to do this, and I'd hate to have to set up accounts for each one.
The project is C# / ASP.NET / MS SQL.
Let me know if you need more information...
Set up a web service to accept incoming data. That way you can validate immediately and reject bad data before it ever gets into your system.
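For instance, a minimal ASP.NET Web API sketch (the controller name and record shape are hypothetical):

using System.Collections.Generic;
using System.Web.Http;

public class BatchUploadController : ApiController
{
    // Hypothetical record mirroring the CSV columns.
    public class OrderRecord
    {
        public string Sku { get; set; }
        public int Quantity { get; set; }
    }

    [HttpPost]
    public IHttpActionResult Post(List<OrderRecord> records)
    {
        if (records == null || records.Count == 0)
            return BadRequest("Rejected: empty or malformed payload.");

        // Validate each record here, then bulk-insert into SQL Server.
        return Ok();
    }
}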
If you want to eliminate FTP, you could allow them to upload files to your site using the FileUpload control. Once the file is uploaded, you can do your server-side processing.
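A bare-bones code-behind sketch of that (the save path and processing helper are assumptions):

using System;
using System.IO;

// Page contains <asp:FileUpload ID="FileUpload1" runat="server" />.
protected void UploadButton_Click(object sender, EventArgs e)
{
    if (FileUpload1.HasFile)
    {
        var path = Server.MapPath("~/App_Data/" + Path.GetFileName(FileUpload1.FileName));
        FileUpload1.SaveAs(path);     // validate the file before trusting its contents
        ProcessCsv(path);             // hypothetical server-side processing
    }
}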
EDIT: From the OP's comments it seems to be an automated process. That said, if their process generates the file, you could:
Allow them to continue their current process, which would involve generating their file and placing it somewhere it can be accessed via a URI with authentication; you could then fetch this file on a schedule and process it. From what it seems, right now they generate a file and upload it to your FTP server, so there is a manual element to begin with.
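That scheduled pull can be very small; a sketch with placeholder URI and credentials:

using System.Net;

// Hypothetical scheduled task: fetch the partner's export over authenticated HTTP.
using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential("apiUser", "apiPassword"); // placeholders
    client.DownloadFile("https://partner.example.com/exports/latest.csv",
                        @"C:\Imports\latest.csv");
}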