My team is developing a new application (C#, .NET 4) that involves a repository for shared user content. We need to decide where to store it. The requirements are as follows:
Share files among users.
Support versions.
Enable search by tags and support further queries such as "all the files created by people from group X".
Different views for different people (team X sees its own content and nobody else can see theirs).
I'm not sure what's best, so:
Can I search over SVN using tags (not SVN tags, of course; more like Stack Overflow's tags)?
Does it make sense to duplicate the content, storing it in both SVN and SQL?
Any other suggestions?
Edit
The application enables users to write validation tests that they later execute. Those tests are shared among many groups on different sites. We need versioning for the usual reasons - undoing changes, recovering from accidental deletions, etc. This calls for SVN.
The thing is, we also want the option of finding all the tests that are tagged "urgent" and have already been executed, for tracking purposes.
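For illustration, if we also kept the tag metadata in SQL next to the SVN content, the tracking query I have in mind would look something like this (the Tests/Tags/TestTags schema below is just a hypothetical sketch, not our actual design):

    using System;
    using System.Data.SqlClient;

    class TrackingQuery
    {
        static void Main()
        {
            // Connection string is a placeholder.
            using (var connection = new SqlConnection(@"Server=.;Database=TestRepo;Integrated Security=true"))
            using (var command = new SqlCommand(
                @"SELECT t.Name, t.SvnPath
                  FROM Tests t
                  JOIN TestTags tt ON tt.TestId = t.Id
                  JOIN Tags tg ON tg.Id = tt.TagId
                  WHERE tg.Name = @tag
                    AND t.LastExecutedUtc IS NOT NULL", connection))
            {
                command.Parameters.AddWithValue("@tag", "urgent");
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // List every test tagged "urgent" that has been executed at least once.
                    while (reader.Read())
                        Console.WriteLine("{0} ({1})", reader["Name"], reader["SvnPath"]);
                }
            }
        }
    }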
I hope I made myself more clear now :)
Edit II
I ran into SvnQuery and it looks good, but does it have an API I can use? I'd rather use their mechanism with my own GUI.
Edit III
My colleague strongly supports using only a database and forgetting file-based storage altogether. He claims it is better for persistence (which is needed - a test is more than the list of commands to execute). I'd appreciate input on this issue, as I think it should be possible to do it either way.
Thanks!
Firstly, consider using Git rather than SVN. It's much faster, and I suspect it's more appropriate for your use case: it's designed to be distributed, meaning your users will be able to use it without internet access, and you won't have any overhead from communicating with the server when saving documents.
Other than that, I can't quite make full sense of your question, but it seems like the gist of it might be better rephrased as: "Can I do tag-based searches/access restriction on top of my version control system, or do I need to create a layer on top to do so?"
If so, the answer is that you need a layer on top. Some exist already, both web-based (e.g. Trac) and desktop-based (e.g. GitX). They won't necessarily implement exactly what you need, but they can be a good starting point for what you're seeking.
You could use SVN.
Shared files: obvious and easy. It also supports the centralised locking that you might need for binary files.
Versions. Obviously.
Search... Now we're getting into difficult territory. There are Lucene add-ons that allow web searching of your repo - OpenGrok, SvnQuery, or svn-search. These would be your best starting points for that.
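If you end up rolling your own tag search on top of Lucene.Net (the same engine those tools build on), indexing a file's path and tags is not much code. A minimal sketch, assuming the Lucene.Net 3.x package - the path and tag values are made up:

    using System.IO;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;
    using Version = Lucene.Net.Util.Version;

    class TagIndexer
    {
        static void Main()
        {
            // Open (or create) an index and add one document per file,
            // storing its repo path and its tags as a searchable field.
            using (var dir = FSDirectory.Open(new DirectoryInfo("tag-index")))
            using (var writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED))
            {
                var doc = new Document();
                doc.Add(new Field("path", "/tests/login_check.xml",
                    Field.Store.YES, Field.Index.NOT_ANALYZED));
                doc.Add(new Field("tags", "urgent regression",
                    Field.Store.YES, Field.Index.ANALYZED));
                writer.AddDocument(doc);
                writer.Commit();
            }
            // Queries like tags:urgent can then be run with IndexSearcher/QueryParser.
        }
    }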
There is no way to stop people seeing what's present in an SVN repo, but you can stop them from accessing it. I don't know if the access control could be extended easily to provide hidden folders; you could ask the SVN developers.
There are some great APIs for working with SVN. Probably the most accessible is SharpSVN, which gives you a .NET assembly, but there are Python and C bindings and all sorts available.
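For a taste of the API, here is a minimal sketch using SharpSVN's SvnClient that pulls the revision history for a path (the repository URL is a placeholder):

    using System;
    using System.Collections.ObjectModel;
    using SharpSvn;

    class SvnHistory
    {
        static void Main()
        {
            using (var client = new SvnClient())
            {
                // Fetch the log entries (revision, author, message) for a path.
                Collection<SvnLogEventArgs> entries;
                client.GetLog(new Uri("http://server/svn/repo/tests"), out entries);

                foreach (var entry in entries)
                    Console.WriteLine("r{0} by {1}: {2}",
                        entry.Revision, entry.Author, entry.LogMessage);
            }
        }
    }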
As mentioned, there are web tools which sit on top of SVN to provide a view into it - Trac, Redmine, and several repo viewers like WebSVN - so there's plenty of sample code to use to cook up your own.
Would you use a DVCS like Git or Mercurial? I wouldn't. Though these have good mechanisms in themselves, it doesn't sound like they're what you're after. They allow people to work on their own and share with others on a peer-to-peer basis (though you can set up a 'central' repo and work with that as everyone's peer). They do not work in a centralised, shared way. For example, if you and I both edit a test case locally and then push to the central repo, we might have issues merging. We will have issues merging if the file is binary or otherwise non-mergeable. In that case you have a problem with losing one person's changes. That's one main reason for not using a DVCS in your case.
If you're trying to get shared tests together, have you looked at some apps that already do this? I noticed TestRail recently, which sounds like what you're trying to do. It's not free (alas), but it's cheap.
Consider a large multi-tier enterprise web application and many services with very complex functionality, mostly written in .NET (C#) on the server side, with HTML and JavaScript on the client, consisting of many hundreds of pages, with the number of service calls (actions) well into the thousands, hosted on multiple servers, and developed over 15 years. Some parts are very new and modern; other parts are legacy.
Some parts of this application are obsolete and nobody actually uses those parts anymore. Whether these are whole unused sub-applications, unused pages, files, service calls, methods, or even lines of code doesn't matter. Older parts do not provide any usage statistics but do use dependency injection.
How can one automatically find out, based on access to the production servers, which parts are unused, without changing the actual source code? So the question is not about finding unreferenced/unreachable code; it's about finding parts that users don't actually use anymore.
One option could be looking at query logs. This discovers unused pages, but it is very difficult (a tedious manual process) to find out which parts in the background are used by those pages only.
Another option could probably be monitoring file access on servers. Does that make sense? Would that be feasible?
Yet another thought is doing something like test coverage tools do, but not during testing. Could coverage (something like lines of code executed) be measured in a live C#.NET application, assuming that debug symbols are available?
It is hard to give an answer without really knowing the situation. However, I do not think there is an automatic or easy way. I do not know the best solution, but I can tell you what I would do. I would start by collecting all log files from the (IIS?) server (for at least a year - code could be used once a year) and analyzing those. This should give you the best insight into which parts are called externally. Do you have those logs?
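As a rough illustration of that analysis, here is a sketch that tallies hits per URL from W3C-format IIS logs. The log folder and the field position of cs-uri-stem are assumptions - check the #Fields header in your own logs:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class IisLogTally
    {
        static void Main()
        {
            var hits = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

            // Assumed log location; adjust to your server's configuration.
            foreach (var file in Directory.GetFiles(@"C:\inetpub\logs\LogFiles\W3SVC1", "*.log"))
            {
                foreach (var line in File.ReadLines(file))
                {
                    if (line.StartsWith("#")) continue;  // skip header lines
                    var fields = line.Split(' ');
                    if (fields.Length < 5) continue;
                    var url = fields[4];                 // cs-uri-stem position is an assumption
                    int count;
                    hits.TryGetValue(url, out count);
                    hits[url] = count + 1;
                }
            }

            // Least-requested URLs first: the ones with zero hits over a year
            // of logs are the first candidates for being unused.
            foreach (var pair in hits.OrderBy(p => p.Value))
                Console.WriteLine("{0,8}  {1}", pair.Value, pair.Key);
        }
    }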
Also check the event logs. Sometimes there are messages like 'Directory does not exist', which could mean that a service hasn't been working for years but nobody noticed. And check for redundant applications; perhaps applications are active on multiple servers.
Check database tables that have timestamps or logging info for recent entries.
Checking the dates on files and analyzing the database may provide additional information, but I don't think it will really help.
Make a list of all applications that you think are obsolete, based on user input or on which applications should be obsolete.
Use your findings to create a list, ordered by the probability that the application/code is obsolete. Next steps, based on your list, could be:
remove redundant applications.
look for changes in the data model or file system and check whether these still match the code.
analyze the database for invalid queries. These could indicate that the data model has changed, causing the application to stop working; if nobody noticed, then that application or functionality is obsolete.
add logging to the code where you have doubts.
at the application level, start marking calls as obsolete, commenting out / removing unused code, or redirecting to (new) equivalent code.
turn off applications and monitor what happens. If there is a dependency, you can take action to remove it or choose to let the application live.
Monitoring the impact of your actions will help you to sort things out. I hope this answer gives you some ideas.
-- UPDATE --
There may be logging available, but collecting, reading, and interpreting it may be hard and time-consuming. To make monitoring easier, you could think of the following:
monitor the database: you can use the profiler tool, but it may be easier to create a trigger that logs all CRUD operations with all the information you need. Create a program that can read the schema of the database and filter the log by table, stored procedure, or view to determine what isn't used. I didn't investigate, but perhaps you can monitor rollbacks and exceptions as well.
monitor IIS: there are of course the log files, but you can also think of adding a module to the website in which you can write custom code to monitor whatever you want - all traffic passes through the module. Take a look here: https://www.iis.net/learn/develop/runtime-extensibility/developing-iis-modules-and-handlers-with-the-net-framework. If I am not mistaken, all you have to do is add the module to the website and configure the website to use it. Create a program to filter the log on URL, status, IP, identification, etc. to determine what is used.
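To make the module idea concrete, a minimal sketch might look like this (the log path is an assumption, and the locking is deliberately crude):

    using System;
    using System.IO;
    using System.Web;

    // Logs every request URL so unused pages can be identified later.
    public class UsageLoggingModule : IHttpModule
    {
        private static readonly object Sync = new object();

        public void Init(HttpApplication application)
        {
            application.BeginRequest += OnBeginRequest;
        }

        private static void OnBeginRequest(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;
            var line = string.Format("{0:o}\t{1}\t{2}",
                DateTime.UtcNow,
                app.Context.Request.HttpMethod,
                app.Context.Request.Url.AbsolutePath);

            lock (Sync)  // crude serialization; fine for a first analysis
            {
                File.AppendAllText(@"C:\logs\usage.log", line + Environment.NewLine);
            }
        }

        public void Dispose() { }
    }

The module would then be registered in the <modules> section of web.config so that every request passes through it.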
I think that is sufficient for a first analysis. It then comes down to interpreting the logs. Perhaps you'll see a way to combine the logs so you can link a request to certain database actions, without having to look in or change the code. Just some thoughts.
You can use ReSharper. It will point out such problems while you're coding.
However, you can also detect problems afterwards. In the menu you will find the entry "ReSharper > Inspect > Code Issues in Solution".
It will create a report; there you will find these issues under "Redundancies in Code".
I am making a web crawler in C# which needs to find webshops. The problem I'm having is that I need to detect whether a webpage is a webshop. If it is, I need to find out what type of e-commerce software it is using. But the problem is that I don't know how to detect that in the source code.
I also found a Chrome plugin called BuiltWith which can detect all kinds of software, but I have yet to find out how they do that.
It would be nice if someone could help me with this problem.
Before giving you an actual answer, it's worth noting that what you're proposing could be in violation of the terms of use for many websites out there. You should take the time to investigate what legal liability you might be exposing yourself and your organization to.
This is going to be a lot of time-consuming work, but it's not difficult. Your crawler just needs to use a rules-based approach to detect signatures in the payload of each page (see the sketch after the list below).
Find the specific software that you're intending to detect.
Find 2-3 sites that are definitely using the software.
Review the HTML payload to see what scripts, CSS, and HTML patterns they have that are common across the sites.
Build a code-based rule that can detect those patterns consistently. For example: if (html.Contains("widgetName")) isPlatformName = true;
Test those patterns across more sites that you know for certain are using that software.
Repeat for each software vendor.
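To illustrate those steps, a minimal sketch of such a rules-based detector might look like this. The signature strings are illustrative guesses rather than verified markers, and example-shop.com is a placeholder:

    using System;
    using System.Collections.Generic;
    using System.Net;

    class ShopDetector
    {
        // Hypothetical signatures; gather real ones by studying known sites.
        private static readonly Dictionary<string, string[]> Signatures =
            new Dictionary<string, string[]>
            {
                { "Shopify",     new[] { "cdn.shopify.com", "Shopify.theme" } },
                { "Magento",     new[] { "/skin/frontend/", "Mage.Cookies" } },
                { "WooCommerce", new[] { "woocommerce", "wc-ajax" } },
            };

        static void Main()
        {
            using (var client = new WebClient())
            {
                string html = client.DownloadString("http://example-shop.com");

                // Report every platform whose markers appear in the page source.
                foreach (var platform in Signatures)
                {
                    foreach (var marker in platform.Value)
                    {
                        if (html.IndexOf(marker, StringComparison.OrdinalIgnoreCase) >= 0)
                        {
                            Console.WriteLine("Looks like: " + platform.Key);
                            break;
                        }
                    }
                }
            }
        }
    }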
The more complicated thing will be when the targets have multiple versions and you need to adapt your rules to be aware of the various versions, or when platforms are very similar.
I think the most complicated part of this is having a well-thought-out bot issue detection, reporting, and throttling architecture in place. You should probably spend the bulk of your time planning that.
That's it.
There are a couple of different ways to determine the technologies a site is using. Firstly, if you are technically savvy, you can right-click on an eCommerce page (catalog, checkout page, etc.) and look at the source code. Many platforms will have hints in the source code that will give you an idea of what the site is running.
You can also look at the DNS/hosting information, which would help you determine if the eCommerce solution is hosted or SaaS (like Shopify, for example).
You can also try using InterNIC and enter the domain name. The results will return the nameservers which could point you in the right direction.
Finally, if all that sleuthing seems too difficult, there’s an easier way! Try BuiltWith. It’s generally pretty reliable, as long as the system you're looking up isn’t custom/proprietary. Enter a domain into BuiltWith and it will show you the platform, widgets used, analytics and tracking codes, CDNs, CMS, payment processors, and more.
A little bit of background first:
I have been using Team Foundation Server for a few months and pretty much know how to use it. I have been using it for my project on CodePlex. They required TFS, and it was in my Visual Studio installation, so basically I never knew what it took to get it working, as it worked seamlessly inside Visual Studio and I just had to do the Check In and Check Out stuff...
But now I wanted to see what other alternatives were available, so I first installed the Mercurial command line (which I had never used), then searched for a GUI alternative and installed TortoiseHg, following the instructions in the documentation on its website. Then it said to install a 3-way diff tool... I searched for one and found TortoiseSVN; I thought it must be some plugin or something, so I searched SO for questions related to my situation, when I stumbled upon this SO question and was pretty mesmerized by how many tools there are for different work.
Now:
Can somebody explain what all these tools are for in source control? Do I have to install a different tool for every different task? Isn't there any single package for all of them? And basically, what are the tasks we perform in source control? I only know Check In, Check Out, and checking differences from the CodePlex website. What else should I know?
Does every website like Git, Bitbucket, etc. use a different Tortoise(xxx) for its source control?
Are source control and version control different terms?
Please help.
This is a huge topic and it will be impossible to provide a single all-encompassing answer. Nonetheless, here are a few thoughts, assuming you are looking for more of a Software Configuration Management solution rather than a simple revision-control-system type approach:
Release Management:
In addition to concurrency control (check-in, check-out, etc.), your SCM can/should also provide history, tagging, branching, and other release-management capabilities. That is, it should always provide a single source of truth as to which source files went into which release, service pack, etc. In order to do this, your build environment needs to be well integrated into your SCM.
WIP Management:
A good SCM system will allow you to compare your work-in-progress to the latest checked-in revision. It should also let you revert your WIP, shelve it temporarily, or merge another's changes on a file-by-file basis.
Documentation & Training:
Do not underestimate how important it is to use a tool that can give you a ton of help: books, documentation, community support, and even paid support if needed. Also, selecting a "popular" tool can mean that some new developers have one less thing to learn.
Continuous Integration:
Automated builds are a must for any serious organization, and you should pick an SCM that can be accessed by your build systems (e.g. Hudson, CruiseControl, Bamboo, etc.).
Security:
The SCM system should have a built-in authentication system and also be able to use outside authentication providers, as many organizations change over time. In addition, it should be able to support developers working outside the firewall, preferably over HTTP.
IDE and Build Tool Integration:
To make all this stuff easier, your SCM must be able to be seamlessly linked into your development system and any command-line tools you use. This is helped by the fact that almost all non-Microsoft IDEs support all the major SCM tools.
Source Browsing:
Most SCM tools that I've seen have a number of very high-quality third-party browsers, such as Fisheye, so I discount this as a differentiating factor.
So which tool to use?
If your organization is fairly well contained within your company, then pick Subversion. It is very popular, integrates with every IDE/OS/build tool, works with TortoiseSVN, supports all platforms and multiple protocols, has several UIs, a powerful command line, and a huge community, is free, and is rock solid. It also has an excellent free book.
If you have a highly distributed development group and/or expect to receive open-source contributions from many different folks, go with the distributed capabilities of Git.
Beyond these two, save yourself a ton of time and hassle and forget everything else... really. I realize I am being opinionated, but you kinda asked for an opinion.
If I were to advise something to you, it would be:
Use Mercurial (aka hg), and start by learning it on the command line. That way you will learn all the basic concepts, which could be somewhat hidden from you when using only a GUI overlay such as TortoiseHg. All with a good, simple tutorial of course, perhaps the widely known hginit, which covers some simple usage scenarios.
That would be the answer to the "What else should I know" part, at least for a start. You can then explore by yourself, having a limited but somewhat solid base. Or, at least, you will be able to ask more concise questions to learn more, or make more sense of the SO question you quote. Your question is somewhat broader than this, of course, but I would advise not trying to grasp everything at once. Each system has its own quirks and specialties, but you shouldn't be worried by that fact now. Just as with programming - you should not try to learn many languages at once if you don't know any yet.
Ah, and as a finishing touch: Tortoise(xxx) is not exactly a revision control system; that's just a typical name for a shell-integrated Windows client for system xxx. As far as I'm concerned, the "Tortoise" part refers to the shell.
PS. the "Mercurial" advice is due to my personal taste of course, but also by the feeling that learning Hg will enable you to grasp most of the ideas from other systems quite easy (if you ever need to).
From my personal experience, I would recommend looking at the new generation of source control systems, called Distributed Version Control Systems. These are systems like Git (and I think Mercurial, but I haven't used that) that actually store a full version control system locally; when you commit to the remote repository (push, in Git terms), you push the changes in your local version control system to the master version control system on the server.
Also, Git is designed to make branching a breeze. In systems like Subversion branching is not as easy, but with Git branching is the recommended practice for making changes. I have used Git, Subversion (SVN), and SourceSafe (the worst source control system of the three by far!), and this is the major advantage of Git over more traditional source control systems.
For example, if you are fixing a bug or adding a feature in a code base that uses SVN, the standard practice would be to:
Check out the branch you are going to work in.
Make any bug fixes and test them.
Check in the changes.
With Git or similar systems you would:
Branch the master branch locally (e.g. development, production version 1.1, etc.).
Make any bug fixes and test them in your locally branched version (e.g. you made a jira-123-bugfix branch for version 1.1).
Merge the branch back into your local copy of the master branch that you created it from and make sure everything is OK.
Then push the changes you made to your local copy of the master branch to the central Git repository.
The advantage of this is that if you have to go back and revisit the bug fix, you still have your local copy of that branch.
See articles like A Successful Git Branching Model for more info.
I need some help with my knowledge of .NET!
I've always created applications using C# (for fun), and I also have knowledge of C. However, I have some unanswered questions and I've been searching for days!
Once I create my form and my application is running fine, do I need to add a class of any sort to the programName.cs file? And why (I briefly know why, but I need to understand exactly why, and when it is a must)?
When I'm finished with the application design and my previous question is answered: what do I need to do to sell the application? Do I just burn a CD with the .exe on it? :S
I guess my question would be: what are the different components needed to produce and sell an application? Is it really as simple as just creating a Windows Forms application that works?
Cheers
Your question is really about two things:
Technical completion
Distribution
You don't need to do anything special, short of perhaps building in release mode rather than debug mode, to have a product ready to ship. You can do extra things, but don't need to. If you want to package it up in an installer application such as an MSI then you can, which makes distributing and installing easier for end users. Alternatively, depending on your audience, zipping it and telling users to xcopy it might be sufficient.
In terms of distribution - burning CDs is out; use the internet. If you're really solving a pain point for customers, and there is a legitimate need for your app, then customers will come. How do you get traffic to your site? Blog about it (ideally in advance), find a few important blogs in the same vertical with good readerships, and insert yourself into them. Add insightful comments, post a topic expanding on them in your blog, and link to it. Build SEO, get inbound links, etc.
If your application is running fine, then you shouldn't need to add any more code. Now, you might want to create an installer package. This is preferable to just having the plain executable, as it will aid in creating Start menu shortcuts and so forth. However, if your app is very simple, you may just want an exe; it really depends, but people usually create an installer. You can create a simple one within Visual Studio.
As far as selling, there are some other things like building a web site and using some sort of payment vendor. A lot depends on your specific needs and goals.
I have designed an application that is nothing more than a simple WinForm with one or two classes to handle data and collections.
Fairly often I find myself refactoring parts of it or adding new features to it, not huge features but small additions to its functionality.
The question I have is what would be the best way to provide an updated program to the user after they have initially downloaded it.
I have thought of a few different options already:
Upload a new version with improvements on CodePlex
Host the application on my personal website and swap in the file with the latest version
Implement some sort of system that works in a way similar to add-ons, to add the new functionality (see the sketch below)
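To illustrate that third option, I imagine the add-on mechanism would look roughly like this (the IAddOn interface and the folder-scanning logic are purely a hypothetical sketch):

    using System;
    using System.IO;
    using System.Reflection;

    // Hypothetical contract that both the app and its add-ons reference.
    public interface IAddOn
    {
        string Name { get; }
        void Execute();
    }

    public static class AddOnLoader
    {
        // Scans a folder for DLLs and instantiates every type implementing IAddOn.
        public static void RunAll(string addOnFolder)
        {
            foreach (var dll in Directory.GetFiles(addOnFolder, "*.dll"))
            {
                var assembly = Assembly.LoadFrom(dll);
                foreach (var type in assembly.GetTypes())
                {
                    if (typeof(IAddOn).IsAssignableFrom(type) && !type.IsAbstract)
                    {
                        var addOn = (IAddOn)Activator.CreateInstance(type);
                        Console.WriteLine("Loaded add-on: " + addOn.Name);
                        addOn.Execute();
                    }
                }
            }
        }
    }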
Is there a way to provide an updated application without the user essentially having to replace their current version by deleting it and downloading a new one? Although the CodePlex idea seems worthwhile, I wasn't sure whether there was a better or easier way.
Thank you for your time.
This is what ClickOnce was designed for.
I've used it regularly in a corporate setting, but it would also be appropriate for an Internet deployment scenario. You may want to invest in a certificate so you can sign your code if this is a commercial product.
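As a minimal illustration, a ClickOnce-deployed application can also check for and apply updates programmatically through the standard System.Deployment API:

    using System.Deployment.Application;
    using System.Windows.Forms;

    static class Updater
    {
        // Checks the deployment server for a newer version and applies it.
        public static void UpdateIfAvailable()
        {
            if (!ApplicationDeployment.IsNetworkDeployed)
                return; // running outside ClickOnce (e.g. from the IDE)

            var deployment = ApplicationDeployment.CurrentDeployment;
            if (deployment.CheckForUpdate())
            {
                deployment.Update();
                Application.Restart(); // pick up the new version
            }
        }
    }

Call this at startup (or from a "Check for updates" menu item), and the deployment machinery takes care of downloading only the changed files.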
Added
Here's a shorter article with a lot of screen shots.
http://www.15seconds.com/issue/041229.htm
(Still looking for more good links).
Added - final addition
Wikipedia sums it up succinctly.
http://en.wikipedia.org/wiki/ClickOnce