What are the pros/cons of using C# code-behind instead of JavaScript to change the contents of an ASP.NET/HTML page? Specifically, I was wondering which would be better if I get a JSON object from a different server through a button click and then have to fill that JSON object's contents into a table and then sort that table.
Also, a user could request multiple objects, which would mean appending multiple tables to the same page without reloading it. Would code-behind allow this, or is JavaScript the better option here?
Pros for WebMethods/WebServices/WebAPIs in providing content for a web page:
You have more resources at your disposal for processing requests; for example, you can access data stored in the filesystem or in a database, process it, and return it in a variety of formats such as XLS, CSV, JSON, images, strings, binary blobs, etc.
You can handle your application's security in a better way, since the code is not exposed and not editable, which also reduces the possibilities for glitches and bugs.
You can leverage more computing power than you would have on the average computer that's sending the request.
Cons for WebMethods/WebServices/WebApis:
The response time will always be higher since requests are sent through the network and all the possible roadblocks apply: latency, network traffic, packet loss, server load, etc.
Large workloads require more complex logic for request processing and will consume more resources, resulting in higher costs for maintaining the application.
The technologies used will usually involve more complexity than using just client-side technologies (C#, ASP.NET, MVC, SQL, WCF, etc. vs. JavaScript, HTML and CSS).
Pros for client-side technologies:
They are (relatively) lightweight and possibly easier to learn and use than server-side technologies.
The response time can be faster than sending requests to a server, provided the operation can be performed without using resources located remotely (for example, creating a chart and saving it as an image does not necessarily require you to send data to the server).
Lots of platforms can be targeted since these technologies are supported by the majority of browsers
For your specific case, DOM manipulation is faster when done on the client side; AJAX is evidence of how much people hated postbacks and round trips to the server for trivial things.
Cons for client-side technologies:
Trying to process operations that are better suited to the server side results in convoluted and sometimes impractical solutions, due to the unavailability of things like access to the filesystem and other local machine resources (HTML5 helps a lot here with the addition of local storage, local databases and other facilities for manipulating binary data, but adoption of HTML5 browsers is still not where everyone wants it to be). Specific example: I once had to create a dashboard using only jQuery, HTML and CSS due to the unavailability of server-side resources. I was also required to render the charts as images to allow saving, and then allow the dashboard values to be exported to Excel, all while supporting every browser back to Internet Explorer 8. Needless to say, the solution was a mix of multiple scripts, plugins and dreadful things like ActiveX objects that ultimately got the job done, but complicated my life beyond necessity. (There is of course the issue of practicality being overlooked during application design, but I had no say in that.)
The execution can be slower than a request sent to a server for complex calculations, and it will perform differently depending on the underlying hardware and available resources.
Your code is exposed, open for examination and can also be edited on the fly. Debugging client-side code is also not a very pleasant experience, though progress keeps being made on this front every day.
At the end of the day, I think there is no best or worst technology, just tools better suited to a specific type of job. If you need to work with objects and data available on the client side, JS + HTML + CSS is the way to go, but if you need to persist and manipulate data stored outside of the client machine, or perform complex calculations that require lots of processing power, then server-side technologies are better suited for the task.
Ideally I prefer ASP.NET Web API, meaning get all the data from the server as JSON over RESTful services; once you have your data, your JS/jQuery should manipulate your HTML or DOM. The two approaches are inclusive rather than exclusive: for your data and security you have to rely on the server, while client-side data shouldn't be heavy and should not disclose any PII. See JqueryGrid for dynamic table manipulation; there are many more rich client controls.
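As a rough illustration of the server half of that approach, here is a minimal ASP.NET Web API sketch; the Car model, the CarsController name and the sample data are all hypothetical. The client would then fetch /api/cars from the button's click handler (for example with jQuery's $.getJSON), build table rows from the returned JSON and append them to the page, which also covers adding several tables without a reload.

    using System.Collections.Generic;
    using System.Web.Http;

    // hypothetical model for the data returned to the page
    public class Car
    {
        public string Make { get; set; }
        public string Model { get; set; }
        public decimal Price { get; set; }
    }

    public class CarsController : ApiController
    {
        // Web API serializes the result as JSON for the browser
        public IEnumerable<Car> Get()
        {
            // in a real application this would come from a database or an external service
            return new[]
            {
                new Car { Make = "Ford", Model = "Focus", Price = 18000m },
                new Car { Make = "Audi", Model = "A4", Price = 32000m }
            };
        }
    }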
In a single sentence: ASP.NET code-behind (C#) is a server-side technology, while JavaScript (or jQuery) is a client-side technology. The first is much more robust and independent of the client platform, while the second provides much better responsiveness (no need for a round trip to the web server). For any business-critical data operations (especially if security is a major concern), a server-side technology (i.e. ASP.NET/C#) is the preferred one.
Regarding your second question in the comments: in case you need to persist any data, you can use either the ViewState or SessionState objects (also, objects/methods in the global.asax file).
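A minimal code-behind sketch of the difference between the two, with hypothetical page, key and control names:

    using System;
    using System.Collections.Generic;
    using System.Web.UI;

    public partial class CarsPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // ViewState survives postbacks of this page only
                ViewState["LastSort"] = "PriceAsc";

                // Session survives across pages for the current user
                Session["RequestedTables"] = new List<string>();
            }
        }

        protected void AddTableButton_Click(object sender, EventArgs e)
        {
            var tables = (List<string>)Session["RequestedTables"];
            tables.Add("cars");
        }
    }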
Also, a valuable addition to ASP.NET is AJAX (Visual Studio implements the so-called UpdatePanel control, which greatly simplifies the implementation of AJAX in ASP.NET pages), which improves the responsiveness of ASP.NET web pages.
Simply put, I want to move development to .NET Core and Angular.
We have a legacy web site project written as a C# ASP.NET application; functionally it meets the business needs, performance is good and it runs on the latest technologies. That said, it is difficult to work with: making changes takes time, it really needs an overhaul of the business layer, the DAL needs to move to Entity Framework (EF), and we want to move to a new GUI framework. These changes are all to improve quality and developer productivity, and ultimately accelerate releases.
We have a set of new features that need to be implemented; they are almost standalone "bolt-ons", ideal to develop as a separate site, but we need to use the existing login details and session details to make the user experience seamless. The goal is to write new functionality in the "new world" and then re-write legacy pages in chunks.
This must be a common and obvious problem, but research on how to achieve it has been fruitless.
The question really is how to dynamically share session data between sites.
It sounds like you're currently keeping your session data in memory. The standard way to get around this is to move your session state out of memory and into something like SQL Server, MongoDB or Redis, so that both applications can access the shared state.
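If you end up sharing state explicitly rather than through a built-in session-state provider, a minimal sketch of the idea, using StackExchange.Redis as an example store (the connection string, key prefix and expiry are assumptions), could look like this; both the legacy site and the new site would reference the same helper:

    using System;
    using StackExchange.Redis;

    // minimal sketch: both applications read/write the same keys in a shared Redis instance
    public static class SharedSessionStore
    {
        private static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost:6379");

        public static void Save(string sessionId, string json)
        {
            // either application (legacy or new) can write the state...
            Redis.GetDatabase().StringSet("session:" + sessionId, json, TimeSpan.FromMinutes(20));
        }

        public static string Load(string sessionId)
        {
            // ...and either application can read it back
            return Redis.GetDatabase().StringGet("session:" + sessionId);
        }
    }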
Update
Now we're getting to the crux of the problem. If you're depending on data that's stored in fields / hidden fields, you might already have a security problem (it's hard to say without knowing more about your implementation).
I'm guessing that the question you're trying to ask is, "How do I securely navigate from the legacy system to the new system (and vice-versa)". There are a couple of possible strategies here:
The first solution that comes to mind is an SSO (Single Sign On) approach. This is where you pass an opaque token from one system to the other that identifies the current user.
If you're able to serve the old and new applications from the same hostname, you can use cookies to store a user token of some sort.
Notes:
Don't pass something like a plain-text user id in the URL.
The SSO token should be something completely separate from the user id (could be a guid that refers to a record in a shared database for example).
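A minimal sketch of that opaque-token hand-off, assuming a token table in a database both sites can reach (all names, the expiry window and the persistence helpers are hypothetical):

    using System;

    public class SsoTokenService
    {
        // legacy site: create a token, persist it in the shared database, then redirect with it
        public Guid IssueToken(int userId)
        {
            var token = Guid.NewGuid();
            // e.g. INSERT INTO SsoTokens (Token, UserId, ExpiresUtc) VALUES (...)
            SaveToken(token, userId, DateTime.UtcNow.AddMinutes(2));
            return token;
        }

        // new site: exchange the token for the user id exactly once, then delete it
        public int? RedeemToken(Guid token)
        {
            var record = LoadToken(token);
            if (record == null || record.ExpiresUtc < DateTime.UtcNow) return null;
            DeleteToken(token);
            return record.UserId;
        }

        // persistence helpers omitted; they talk to the shared database
        private void SaveToken(Guid token, int userId, DateTime expiresUtc) { /* ... */ }
        private TokenRecord LoadToken(Guid token) { return null; /* ... */ }
        private void DeleteToken(Guid token) { /* ... */ }

        private class TokenRecord { public int UserId; public DateTime ExpiresUtc; }
    }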
I've a big database which contains a lot of data from a big enterprise.
We would like to be able to dispatch this data to different external applications (external, meaning that are not developed by us, but only accessible in our local network).
Consumers can be of very different kinds: accounting, reporting, tech(business), website, ...
With a big variety of formats: CSV, webservice, RSS, Excel, ...
The execution of these exports can be of two different types: scheduled (like every hour), or on demand.
There are mostly two kinds of exports: almost-real-time data (meaning we want current data), or statistical data (meaning we take a period of time into account).
I've yet to find a good approach to allow this kind of access.
I thought about BizTalk, but I don't know the product very well, and I'm not sure it can make scheduled calls and contain business logic. Does anyone have enough knowledge of BizTalk to tell me whether it can fit my needs?
If BizTalk isn't a good fit, are there any libraries which can ease the development of a custom service?
BizTalk can be made to do what you want, i.e. extract data from your database, transform it into various formats and send it to various systems on a scheduled basis, or on demand by exposing it as a web service/WCF service (not entirely out of the box; you might need to purchase additional adapters, pipelines, etc.).
But the question here is: how database-intensive is this task? If it involves large volumes of data, BizTalk is clearly not a favorite candidate, as BizTalk struggles with large data. It is good for routing (without transforming/inspecting), though, even for large data files.
SSIS, on the other hand is good for data intensive tasks. If your existing databases are on SQL Server, then it fits even better for your data intensive exports/imports and transformations. But it falls short when it comes to the variety of ways you need to connect to external systems (protocols).
So, you are looking at a combination of a good ETL tool, like SSIS, as well as something good at routing like Biztalk. Neither of them clearly fit your needs on their own, in terms of scalability, volumes, connectivity, data formats, etc.
Your question can result in quite a broad implementation. You could consider using a service bus (pub/sub) along with some form of CQRS (if applicable).
My FOSS Shuttle ESB project is here: http://shuttle.codeplex.com/
It has a generic scheduler built in. You could, of course, go with any other service bus such as MassTransit, or NServiceBus.
I think you could use ASP.NET Web API. http://www.asp.net/web-api
I find it the easiest way to export different kinds of info and file formats.
It won't generate scheduled reports or files; you will need a client app or a Windows service to call it. It is similar to web services, but it can return different formats and also files.
As for creating Excel files and the like, you have to create them manually. That's a bit of a drawback, but I like this approach because it can be easily hosted on IIS, and all the functions your clients are going to call can be in the same place and even called from JavaScript. As I see it, it's a bit more work for you, but it produces services that are really easy to consume.
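As a rough illustration of the on-demand side, here is a minimal Web API sketch that returns data as a downloadable CSV file (the controller name, action and columns are hypothetical); a scheduled task or Windows service could call the same endpoint for the timed exports:

    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Web.Http;

    public class ExportController : ApiController
    {
        // returns the export as a downloadable CSV file
        [HttpGet]
        public HttpResponseMessage Csv()
        {
            var csv = new StringBuilder();
            csv.AppendLine("Id,Name,Amount");
            csv.AppendLine("1,Example,42");   // real rows would come from the database

            var response = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(csv.ToString(), Encoding.UTF8, "text/csv")
            };
            response.Content.Headers.ContentDisposition =
                new ContentDispositionHeaderValue("attachment") { FileName = "export.csv" };
            return response;
        }
    }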
By dispatch, I'm assuming you're looking for a pub/sub model. Take a hard look at NServiceBus's (NSB) pub/sub capabilities, http://nservicebus.com/docs/Samples/PublishSubscribe.aspx. Underneath the covers NSB makes heavy use of MSMQ, which has become a lot more stable over time.
If you want to venture outside of your .NET comfort zone, check out Apache Camel or Fuse's Enterprise Service Bus. Either of these tools will support what you need as well. I've used Camel in some extremely high throughput areas without any major issues.
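For context, a minimal sketch of what NSB-style pub/sub looks like in code (older v3/v4-style API; the message, class and path names are all hypothetical):

    using NServiceBus;

    // event published whenever a new export is ready to dispatch
    public class DataExported : IEvent
    {
        public string Format { get; set; }    // e.g. "CSV", "RSS"
        public string Location { get; set; }  // where consumers can pick the export up
    }

    // publisher side, e.g. inside the scheduled export job; IBus comes from the endpoint configuration
    public class ExportJob
    {
        private readonly IBus bus;
        public ExportJob(IBus bus) { this.bus = bus; }

        public void Run()
        {
            // ... produce the export, then announce it ...
            bus.Publish(new DataExported { Format = "CSV", Location = @"\\share\exports\latest.csv" });
        }
    }

    // subscriber side: each consuming application (accounting, reporting, website, ...) hosts its own handler
    public class DataExportedHandler : IHandleMessages<DataExported>
    {
        public void Handle(DataExported message)
        {
            // pick up the export from message.Location and process it for this consumer
        }
    }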
We are looking into a better way to deliver data update notifications to a web front end.
These notifications trigger events that execute business logic and update elements via JavaScript (JS) to dynamically refresh the page without reloading.
Currently this is done with a server-side thread, which periodically fires an asynchronous JS event to notify the web front-end(s) to check whether the data has changed or not.
This mechanism works, but the feeling within the team is that it could be a lot more efficient.
The tool is written in C#/ASP.NET combined with JS, and we use the PokeIn library for the async JS/C# calls.
Any suggestions for improved functionality are welcome! Including radically different approaches still maintaining the JS/C#/ASP.NET usage.
Is this a real question? I would like to add this as a comment, but I don't have enough reputation. Anyway, if you need what PokeIn does for you (object translation between the parties), that is the only option you have. There are solutions like WebSync and SignalR, but they don't handle the object translation and take no fundamentally different approach. Better still, you benefit from PokeIn's WebSocket feature: both of the others need Windows Server 8 for WebSockets, while PokeIn lets you use WebSockets on any server version or platform.
Sounds like SignalR would help you? This blog post gives a good introduction.
I was trying to solve something similar (reporting real-time updates triggered from an external services communicating with the server) recently and it turned out SignalR is a perfect fit for this situation.
Basically it is a library wrapping long polling, WebSockets and a few other techniques, using (transparently) whatever is available on the server and the client.
I only have good experience with it so far.
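For reference, a minimal sketch of the server-side half of that pattern (SignalR 2.x style; the hub and method names are hypothetical). The browser registers a 'dataChanged' handler on the hub proxy and updates the DOM when it fires, so no polling thread is needed:

    using Microsoft.AspNet.SignalR;

    // hub the browser clients connect to; no server methods are needed just to push notifications
    public class UpdatesHub : Hub
    {
    }

    public static class UpdateNotifier
    {
        // call this from the server-side code that detects a data change
        public static void NotifyDataChanged(string entityName)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<UpdatesHub>();

            // invokes the client-side handler registered as 'dataChanged' on every connected browser
            context.Clients.All.dataChanged(entityName);
        }
    }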
I want to crawl through, let's say, other companies' websites (for cars, for example) and extract read-only information into my local database. Then I want to be able to display this collected information on my website. Purely from a technology perspective, is there a .NET tool, program, etc. already out there that is generic enough for my purpose, or do I have to write it from scratch?
To do it effectively, I may need a WCF job that just mines data on a constant basis and refreshes the database, which then provides data to the website.
Also, is there a way to mask my calls to those websites? Would I create "traffic burden" for my target websites? Would it impact their functionality if I am just harmlessly crawling them?
How do I make my request look "human" instead of coming from Crawler?
Are there code examples out there on how to use a library that parses the DOM tree?
Can I send a request to a specific site and get a response as a DOM with the WebBrowser control?
Use HtmlAgilityPack to parse the HTML. Then use a Windows Service (not WCF) to run the long-running process.
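A minimal HtmlAgilityPack sketch, assuming a hypothetical listing page and XPath; the long-running Windows service would call something like this on a timer and write the results to the local database:

    using HtmlAgilityPack;

    public static class ListingScraper
    {
        public static void Run()
        {
            var web = new HtmlWeb();
            HtmlDocument doc = web.Load("http://example.com/cars");

            // grab every row of a hypothetical listing table
            var rows = doc.DocumentNode.SelectNodes("//table[@id='listings']//tr");
            if (rows == null) return;   // SelectNodes returns null when nothing matches

            foreach (HtmlNode row in rows)
            {
                string text = row.InnerText.Trim();
                // persist 'text' (or the individual cells) to the local database here
            }
        }
    }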
I don't know about how you'd affect a target site, but one nifty way to generate human-looking traffic is the WinForms browser control. I've used it a couple of times to grab things from Wikipedia, because my normal approach of using HttpWebRequest to perform an HTTP GET tripped a non-human filter there and I got blocked.
As far as affecting the target site, it totally depends on the site. If you crawl Stack Overflow enough times fast enough, they'll ban your IP. If you do the same to Google, they'll start asking you to answer captchas. Most sites have rate limiters, so you can only make a request so often.
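On the "look human" and rate-limit points, a minimal sketch of a polite fetcher (the user-agent string and the delay are arbitrary assumptions, not recommendations for any particular site):

    using System;
    using System.IO;
    using System.Net;
    using System.Threading;

    public static class PoliteFetcher
    {
        public static string Fetch(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36";  // browser-like user agent
            request.Accept = "text/html";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string html = reader.ReadToEnd();
                Thread.Sleep(TimeSpan.FromSeconds(5));   // crude rate limiting between requests
                return html;
            }
        }
    }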
As far as scraping the data out of the page, never use regular expressions; it's been said over and over. You should either use a library that parses the DOM tree or roll your own if you want. At a previous startup of mine, the way we approached the issue was to write an intermediary template language that told our scraper where the data was on the page, so that we knew what data, and what type of data, we were extracting. The hard part you'll find is constantly changing and varying data. Once you have the parser working, it takes constant work to keep it working, even on the same site.
I use a fantastically flexible tool, Visual Web Ripper. Output to Excel, SQL, text. Input from the same.
There is no generic tool which will extract the data from the Web for you; this is not a trivial operation. In general, crawling the pages is not that difficult, but stripping/extracting the content you need is, and that part has to be customized for every website.
We use professional tools dedicated to this; they are designed to feed the crawler with instructions about which areas within the web page contain the data you need to extract.
I have also seen Perl scripts designed to extract data from specific web pages. They can be highly effective depending on the site you parse.
If you hit a site too frequently, you will be banned (At least temporarily).
To mask your IP you can try http://proxify.com/
More of a design/conceptual question.
At work the decision was made to have our data access layer called through web services. So our website would call the web services for any and all data to and from the database. Both the website and the web services will be on the same machine (so no trip across the wire), but the database is on a separate machine (so that requires a trip across the wire regardless). This is all in-house; the website, web services and database are all within the same company (AFAIK, the web services won't be reused by any other party).
To the best of my knowledge, the website will open a port to the web services, and the web services will in turn open another port and go across the wire to the database server to get/submit the data. The trip across the wire can't be avoided, but I'm concerned about the web services standing in the middle.
I do agree there need to be distinct layers for the functionality (such as a business layer, data access layer, etc.), but this seems overly complex to me. I'm also sensing there will be some performance problems down the line.
It seems to me it would be better to have the DAL assemblies referenced directly within the solution, thus negating the first port-to-port connection.
Any thoughts (or links) both for and against this idea would be appreciated.
P.S. We're a .NET shop (migrating from VB to C# 3.5).
Edit/Update
Marked Dathan's as the answer. I'm still not completely sold (I'm still kind of on the fence, though leaning towards it not being as bad as I feared), but he provided a well-thought-out answer. I appreciated all the feedback.
Both designs (app to web service to db; app to db via DAL) are pretty standard. Web services are often used when interfacing with clients to standardize the semantics of data access. The web service is usually able to more accurately represent the semantics of your data model than the underlying persistence store, and thus helps the maintainability of the system by abstracting and encapsulating IO-specific concerns. Web services also serve the additional purpose of providing a public interface (though "public" may still mean internal to your company) to your data via a protocol that's commonly accessible across firewalls. When using a DAL to connect directly to the DB, it's possible to encapsulate the data IO concerns in a similar way, but ultimately your client has to have direct access to the database. By restricting IO to well-defined semantics (usually CRUD+Query), you add an additional layer of security. This isn't such a big deal for you, since you're running a web app, though - all DB access is already done from trusted code. The web service does provide an increase in robustness against SQL injection, though.
All web service justifications aside, the real questions are:
How much will it be used? The website/web service/database format does impose slightly higher overhead on the web server - if the website gets hammered, you want to consider long and hard before putting another service on the same machine. Otherwise, the added small inefficiency is probably not a big deal. On the other hand, if the site is getting hammered, you probably want to scale horizontally anyway, and you should be able to scale the web service at the same time.
How much do you gain? One of the big reasons for having a web service is to provide data accessibility to client code - particularly when multiple possible application versions need to be supported. Since your web app is the only client to use the web service, this isn't a concern - it's probably actually less effort to version the app by itself.
Are you looking to expand? You say it probably won't ever be used by any client other than the single web app, but these things have a way of gaining in size. If there's any chance your web app might grow in scope or popularity, consider the web service. By designing around a web service, you're already targeting a modular, multi-host solution, so your app will probably scale with fewer growing pains.
In case you couldn't guess, I'm a web service fan. But the above are also my honest (if somewhat biased) opinions on the subject. If you do go the web service route, be sure to make it simple - keep application logic in the app and service logic in the service, and try to draw a bright line between them when extending the two. And do design your service for efficiency and configure the hosting to keep it running as smoothly as possible.
This is a questionable design, but your shop isn't the only one using it.
Since you're using .NET 3.5 and running on the same machine, you should use WCF with the netNamedPipeBinding, which uses binary data transfer over named pipes and works only on the same machine. That should mitigate the performance issue somewhat.
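A minimal sketch of what that looks like in code (the service contract, class names and address are hypothetical; in practice the endpoint is usually declared in web.config rather than built in code):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        string GetCustomerName(int id);
    }

    public class DataService : IDataService
    {
        public string GetCustomerName(int id) { return "Example"; /* real DB call here */ }
    }

    // service side: host the DAL behind a named-pipe endpoint
    public static class PipeHost
    {
        public static void Start()
        {
            var host = new ServiceHost(typeof(DataService), new Uri("net.pipe://localhost"));
            host.AddServiceEndpoint(typeof(IDataService), new NetNamedPipeBinding(), "DataService");
            host.Open();
        }
    }

    // client side (the web site), on the same machine
    public static class PipeClient
    {
        public static string GetCustomerName(int id)
        {
            var factory = new ChannelFactory<IDataService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/DataService"));
            IDataService proxy = factory.CreateChannel();
            return proxy.GetCustomerName(id);
        }
    }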
I like the idea because it gives you flexibility. We use a very similar approach because we can have more than 1 type of database storing our data (MSSQL or Oracle) depending on our customer install choices.
It also gives customers the ability to hook into our database if they choose not to use our front end web site. As a result we get an open API for little to no extra effort.
If speed is your most critical issue, then you have to reduce your layers. However, in most cases the time it takes for your web service to process the request from the database does not add a lot of time. (This assumes you build your web service layer correctly; you can easily make it slow if you don't watch it.)