Organising web parts in Visual Studio 2010 - C#

Quick question. When creating web parts in VS 2010, is it best to group all web parts into one project or to create separate projects? I can't quite get my head around how best to organise this. The web parts will be part of a bigger intranet solution but will be completely separate entities that are developed and updated independently of each other over a period of time.

Quick question, not so quick answer. It really depends on the structure of your intranet and how you want to make the web parts available.
Here are a few things that are important to consider.
Will a lot of the web parts share code (logging is a typical example)? You can put the shared code in a separate project, but if you are going to reuse a lot of it, it can be handy to have everything in the same solution. If you are sure you want them separate, you can go with a WSP per web part, but bear in mind that you will then also have a separate feature for each web part. For the shared-code case, see the sketch after this list.
Where are your web parts going to be used? If you have a few web applications and only certain web parts are going to be used in a certain web application, then you can consider putting those web parts in a separate package.
Hand in hand with the previous point: do you always want every web part to be available? If you work with one web application containing several site collections, you may not always want to show all the web parts. To solve this, consider putting the web parts under different features and only activating the features for the web parts you want.
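For the shared-code case, here is a minimal sketch of the idea (project, namespace and class names are just illustrative): the helper lives in its own class library, and each web part project references that assembly, whether the web parts end up in one WSP or several.

    // Class library project, e.g. MyIntranet.Common (hypothetical name),
    // referenced by every web part project that needs logging.
    using System.Diagnostics;

    namespace MyIntranet.Common
    {
        public static class Logger
        {
            public static void Write(string source, string message)
            {
                // Kept deliberately simple; in a real farm you would probably
                // route this to ULS or another logging framework instead.
                Trace.WriteLine(source + ": " + message);
            }
        }
    }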
I am sure there are more considerations, but these are the ones that come to mind.

Actually, it depends on your requirements, but it is fine to wrap them all in a single project: only the web parts you add to a page are executed, so grouping them together will not cause any performance issue or anything else.

Related

Tracking All Code Usage In ASP.NET Web Application

I have been tasked with taking an existing ASP.NET website that has many lines of code and many projects and redesigning it. I would like to know if anyone has ideas on how to track every method/property that gets called while users are on the site. I would like to identify the code used the most, so that I know what to carry over to the redesign, and the code that is not used at all and can potentially be removed completely. Many thanks in advance. There are both VB.NET and C# projects in the solution, so any solution would have to support each language. Also, free/OSS solutions are best for me right now.
I am currently using VS.NET 2015 Community, if that helps. :-)
You can do it in multiple ways:
Introduce method-level tracing using an AOP framework (like PostSharp). That way you can log the method call chain of one specific request and start your analysis from there (see the sketch after this list).
If your app is backed by SQL Server, enable profiling and look through all SQL queries executed for one request, then trace them back to the codebase and refactor from there.
Use Visual Studio's Code Map to understand the code base and work from there.
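For the AOP option, here is a rough sketch of what a tracing aspect might look like with PostSharp (the attribute name and where you log to are just illustrative); because PostSharp weaves at the IL level, the same aspect covers both the C# and VB.NET projects:

    using PostSharp.Aspects;
    using PostSharp.Serialization;

    // Hypothetical aspect: logs every entry into a decorated method so you
    // can see which members are actually exercised while users browse the site.
    [PSerializable]
    public class TraceUsageAttribute : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            System.Diagnostics.Trace.WriteLine(
                args.Method.DeclaringType.FullName + "." + args.Method.Name);
        }
    }

You can apply the attribute to individual classes, or multicast it over whole namespaces, and then aggregate the trace output to see which methods never show up.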
You can use ReSharper: right-click the project and select 'Find Code Issues'. The results include an 'Unused Symbols' group showing which members are never used.

Separating existing web project

I have inherited an existing .Net/angularJS project. We have a need moving forward to allow customization per client, while still maintaining synchronization through version control (in this case, git).
I'm not a .Net developer--my .Net experience is limited to writing a service a couple of years ago, starting the BrowserStack tests for the project, and the occasional foray for code review type activities. I'm primarily a Flash/Flex developer with a fair amount of ASP Classic and some PHP experience.
My preliminary research seems to indicate that I can do what I need to do with a git subtree, but I need to find where the seams should be to separate out the custom stuff from the shared code. Right now, the HTML and JS are in the same directory as the web services. My first order of business will be to separate those out, but I don't completely understand
Why everything's all in one place to begin with
What the implications are of moving things (project settings, paths, etc.)
When I wrote the service way back, I remember we had to scrap it because the server the site was hosted on didn't support that version of .NET, and cross-domain restrictions meant I couldn't simply host the service on a server where it would work. I know that things have changed and there's now a way to allow that, but I figure that's the sort of problem I should be looking to avoid as I do this.
I figure I can't be the first person who has needed to make this kind of separation in a project that, I think, started from the monolithic web project template. But because of a short deadline and my lack of .NET knowledge, I'd feel better if someone could point me in the right direction or at least alert me to some of the gotchas I should plan to encounter.
Are you trying to decouple the projects? If so, this might be a good help:
http://www.codeproject.com/Articles/439688/Creating-ASP-NET-application-with-n-tier-architect
One of my recent projects was almost the same as what you describe above, so I ended up scrapping the old version, creating a brand new project, and decoupling the related pieces in the solution.
The best way to keep things understandable is to make sure you separate the client side (JavaScript/HTML/CSS) and the server side (EF/SP calls/DTOs etc.) by creating different projects within the same solution.
Hope this helps.
So I kept digging, and I finally found a pair of tutorials that address exactly this issue.
Creating an empty ASP.Net Project Powered by Angular JS Using Visual Studio
AngularJS Consuming ASP.NET Web API RESTful Services
In a nutshell, you copy the client project's URL from its properties panel into the service project's properties panel, append '/api' to the end of the URL, and allow VS to create a virtual directory for you.
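For anyone following along, the separated service project then just exposes Web API endpoints under that '/api' path. A minimal sketch of such an endpoint (the controller name and data are made up for illustration) that the AngularJS client can call with $http:

    using System.Collections.Generic;
    using System.Web.Http;

    // Hypothetical endpoint living in the separated services project;
    // reachable at GET /api/clients once the virtual directory is in place.
    public class ClientsController : ApiController
    {
        public IEnumerable<string> Get()
        {
            return new[] { "Client A", "Client B" };
        }
    }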
Now for my next trick, figuring out how to publish it...

How can a large MVC 4 application be divided to multiple smaller applications?

I have an "ASP.NET MVC 4 Web Application" divided into multiple areas in Visual Studio 2013.
The project has grown to the extent that I currently have a large number of areas in the application.
There is one specific area that can be separated into another MVC 4 application by itself. I created a new MVC 4 application and moved this area to it.
These are the problems I am facing:
The moved area uses code from other classes in the original app. Specifically, there is a Shared area that contains some attributes as well as a BaseController.
The views of the moved area also use JS scripts and other content. Most of the time, the scripts used are not exclusive or specific to the area that uses them, so the content is cross-referenced among many areas.
Here is what I tried:
To solve the compilation errors, I tried adding a reference to the old MVC app from the new one, so that classes in the new app can see classes in the old app such as those in the Shared area. However, this didn't seem to solve the problem.
To fix the JS script and content issues, I created a solution folder and moved all the scripts to it. However, I can't verify whether that works, since I can't start the app due to the compilation errors.
Here are my questions:
Does VS 2013 provide some facility to separate an MVC application, or to make one app refer to code and content from another app?
What do you think is the best strategy for making such a separation?
I tried searching SO and Google for how to accomplish such a separation, but what I found seems to apply to previous versions of MVC and VS. Since I am new to the concept of MVC areas, and to MVC in general, I can't get around this.
I would take a more general point of view on the problem. It's not specifically an ASP.NET MVC problem, but rather a .NET code organization issue. I don't believe there's any one canonical methodology, rather some general guidelines that help depending on your preferences and how your project/team works.
The overall items I would focus on are splitting your code into separate assemblies and creating an asset pipeline.
Splitting assemblies allows your projects to compile and recompile independently, as needed. This has the added benefit of letting you split things across machines as web services, reuse code in related projects like WCF services, or GAC specific assemblies if your requirements call for it.
Putting things into a pipeline just makes sharing and managing everything easier.
Here are a few suggestions:
You don't need one monolithic project in your solution for everything, which sounds like what you have. Divide things up into multiple projects, resulting in multiple assemblies. You can also have multiple solutions for independent pieces of the app that don't need the other pieces to run.
Start with any "back-end" and "business" logic that is shared throughout your application. For example, if you have a CustomerService class that is used by multiple views, put it into something like MyCompany.MyApp.Services (see the first sketch after this list). Obviously, the more granular you get, the more specific your naming might become if you end up adding lots of projects.
For MVC projects specifically, it's usually very easy to split your models into one or more assemblies, for example MyCompany.MyApp.SomeGroupingName1.Models and MyCompany.MyApp.SomeGroupingName2.Models, or simply one assembly such as MyCompany.MyApp.Models.
You can split your controllers into multiple assemblies as well, same as the previous point.
You can abstract any kind of components in the UI into their own assemblies as well, same goes for views.
Use solution folders as you mentioned.
Introduce an asset management library if needed to help build out and solve the bundling of resources. If the library doesn't already do this, you can create an asset pipeline it plugs into that handles minification of CSS and JS. Then in your app you simply reference the bundles you need on the pages that need them, rather than including everything manually per page or sharing everything across the whole site (see the bundling sketch at the end of this answer).
Prefer composition over inheritance when possible. It's not clear from your post, but I have seen many issues with people trying to implement a one-rules-them-all base class for every controller, view, etc. It might work, as they say, until it doesn't. It's still a good idea to use base classes and inheritance, but don't tie multiple things that do different things to the same base implementation. At that point, an interface might be more appropriate if you need to enforce a contract of some kind.
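As a rough illustration of the shared business logic point above (all names are invented for the example), the shared code becomes an ordinary class library that both MVC projects reference, instead of referencing each other:

    // Class library project: MyCompany.MyApp.Services (hypothetical name)
    using System.Collections.Generic;

    namespace MyCompany.MyApp.Services
    {
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerService
        {
            // Shared business logic that controllers in any of the MVC
            // projects can call once they reference this assembly.
            public IEnumerable<Customer> GetActiveCustomers()
            {
                // ...query your data layer here; an empty list keeps the sketch self-contained.
                return new List<Customer>();
            }
        }
    }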
A final point: if you bundle things in your content pipeline, the client downloads your scripts far fewer times if you do it right, and the browser can cache them for other pages. It is sometimes better to push one big file down once than to make many network connections on every page to files that don't cache easily, or to inline things in your views.
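For the bundling point, if you stay with the stock ASP.NET Web Optimization framework rather than a separate asset library, a sketch might look like this (bundle names and file paths are assumptions); you call RegisterBundles from Application_Start and then render the bundle only in the views that need it:

    using System.Web.Optimization;

    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Scripts shared by several areas are bundled (and minified in
            // release builds) once, then referenced only on the pages that need them.
            bundles.Add(new ScriptBundle("~/bundles/shared")
                .Include("~/Scripts/jquery-{version}.js",
                         "~/Scripts/site-common.js"));

            bundles.Add(new StyleBundle("~/Content/css")
                .Include("~/Content/site.css"));
        }
    }

In a view you then write @Scripts.Render("~/bundles/shared") instead of listing each script file by hand.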

SQL based storage vs SVN

My team is developing a new application (C#, .NET 4) that involves a repository for shared user content. We need to decide where to store it. The requirements are as follows:
Share files among users.
Support versions.
Enable search by tags and support further queries such as "all the files created by people from group X"
Different views for different people (team X sees its own content and nobody else can see theirs).
I'm not sure what's best, so:
Can I search over SVN using tags (not SVN tags, of course - more like Stack Overflow's tags)?
Is there any sense in duplicating the content in both SVN and SQL?
Any other suggestions?
Edit
The application enables users to write validation tests that they later execute. Those tests are shared among many groups at different sites. We need versioning for the usual reasons - undoing changes, recovering from sudden deletions, etc. This calls for SVN.
The thing is, we also want the option to find all the tests that are tagged "urgent" and have already been executed, for tracking purposes.
I hope I made myself more clear now :)
Edit II
I ran into SvnQuery and it looks good, but does it have an API I can use? I'd rather use its mechanism with my own GUI.
Edit III
My colleague strongly supports using only a database and forgetting file-based storage. He claims it is better for persistence (which is needed - a test is more than the list of commands to execute). I'd appreciate input on this issue, as I think it should be possible to do it either way.
Thanks!
Firstly, consider using Git rather than SVN. It's much faster, and I suspect it's more appropriate in your use case: it's designed to be distributed, meaning your users will be able to use it without internet access, and you won't have any overhead from communicating with the server when saving documents.
Other than that, I'm not making full sense of your question, but it seems like the gist of it might be better rephrased as: "Can I do tag-based searches and access restriction on my version control system, or do I need to create a layer on top to do so?"
If so, the answer is that you need a layer on top. Some exist already, both web-based (e.g. Trac) and desktop-based (e.g. GitX). They won't necessarily implement exactly what you need, but they can be a good starting point for what you're seeking.
You could use SVN.
Shared files: obvious and easy. It also supports the centralised locking that you might need for binary files.
Versions. Obviously.
Search... now we're getting into difficult territory. There are Lucene-based add-ons that allow web searching of your repo - OpenGrok, SvnQuery or svn-search. These would be your best starting points for that.
There is no way to stop people seeing what's present in an SVN repo, but you can stop them from accessing it. I don't know if the access control could easily be extended to provide hidden folders; you could ask the SVN developers.
There are some great APIs for working with SVN; probably the most accessible is SharpSvn, which gives you a .NET assembly, but there are Python and C bindings and all sorts available (see the sketch below).
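For the API route, here is a minimal sketch of driving the repository from C# with SharpSvn (the repository URL and local paths are placeholders):

    using SharpSvn;

    class SvnExample
    {
        static void Main()
        {
            using (var client = new SvnClient())
            {
                // Check a working copy out of the repository (placeholder URL/path).
                client.CheckOut(new SvnUriTarget("http://server/svn/tests/trunk"),
                                @"C:\work\tests");

                // ...edit files in the working copy...

                // Commit the changes with a log message.
                client.Commit(@"C:\work\tests",
                              new SvnCommitArgs { LogMessage = "Update validation test" });
            }
        }
    }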
As mentioned, there are web tools that sit on top of SVN to provide a view into it - Trac, Redmine, and several repo viewers like WebSVN - so there's plenty of sample code you could use to cook up your own.
Would you use a DVCS like Git or Mercurial? I wouldn't. Though these have good mechanisms in themselves, it doesn't sound like they're what you're after. They allow people to work on their own and share with others on a peer-to-peer basis (though you can set up a 'central' repo and have everyone work against it as a peer). They do not work in a centralised, shared way. For example, if you and I both edit a test case locally and then push to the central repo, we might have issues merging - and we certainly will if the file is binary or otherwise non-mergeable, in which case you risk losing one person's changes. That's one main reason for not using a DVCS in your case.
If you're trying to get shared tests together, have you looked at apps that already do this? I noticed TestRail recently, which sounds like what you're trying to do. It's not free (alas), but it's cheap.

Design considerations when implementing/distributing updates for application

I have already designed an application that is nothing more than a simple WinForms app with one or two classes to handle data and collections.
Fairly often I find myself refactoring parts of it or adding new features - not huge features, but small additions to its functionality.
The question I have is: what would be the best way to provide an updated program to users after they have initially downloaded it?
I have thought of a few different options already:
Upload a new version with improvements on CodePlex
Host the application on my personal website and replace the file with the latest version
Implement some sort of add-on-style system for adding the new functionality
Is there a way to provide an updated application without the user essentially having to delete their current version and replace it with a newly downloaded one? Although the CodePlex idea seems worthwhile, I wasn't sure if there was a better or easier way.
Thank you for your time.
This is what ClickOnce was designed for.
I've used it regularly in a corporate setting, but it would also be appropriate for an Internet deployment scenario. You may want to invest in a certificate so you can sign your code if this is a commercial product.
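A ClickOnce-deployed app can also check for updates itself at start-up. Here is a rough sketch using the System.Deployment API (the update-on-every-launch policy is just an example; add a reference to System.Deployment):

    using System.Deployment.Application;
    using System.Windows.Forms;

    static class UpdateCheck
    {
        // Call this early in Program.Main, before showing the main form.
        public static void UpdateIfAvailable()
        {
            // Only works when the app was launched via ClickOnce.
            if (!ApplicationDeployment.IsNetworkDeployed)
                return;

            var deployment = ApplicationDeployment.CurrentDeployment;
            if (deployment.CheckForUpdate())
            {
                deployment.Update();      // download the new version
                Application.Restart();    // restart into the updated version
            }
        }
    }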
Added
Here's a shorter article with a lot of screen shots.
http://www.15seconds.com/issue/041229.htm
(Still looking for more good links).
Added - final addition
Wikipedia sums it up succinctly.
http://en.wikipedia.org/wiki/ClickOnce
