Testing legacy ASP.NET Web Forms [closed] - C#

I need help figuring out how to test old legacy ASP.NET Web Forms pages.
A lot of the web pages in our project were written a long time ago, and it is getting to the point where maintaining them or adding extra features is a pain in the neck. There are no methods whatsoever: the code is not modularized, and server-side code is scattered throughout the front pages (.aspx), mixed together with the UI logic.
Rewriting these legacy ASP.NET Web Forms seems to be the only way to go for long-term benefit. However, here is our problem: these pages all work fine right now, but no one on our team completely understands the business logic behind them, and reading through the code line by line would be very painful. We thought that writing some test cases, applying the same tests to our newly refactored, modern web forms, and comparing the results would be more promising and accurate.
Does anyone know how I can go about this? How do I test legacy ASP.NET Web Forms when the code is not organized or modularized? Any suggestion or recommendation would be helpful.
So far I have looked at Selenium, but it seems to be more for UI testing than for business logic. My main focus is what data gets pulled from the database and displayed on the form, and what data gets written to the database (especially which tables) after the submit.
I also looked at the Visual Studio built-in test suite, but that approach seems to require the code to be organized into methods and functions, so I didn't continue reading.
Another thought I have in mind is monitoring the database to see which tables change while I manually open a web page and input/submit some data. Would this be a good option?
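As a very rough sketch of that table-monitoring idea (not part of the original post; it assumes SQL Server, and the connection string and table filter are placeholders), one could snapshot per-table row counts before and after a manual submit and print the tables that changed:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class TableRowCounts
{
    // Returns a row count per user table, taken from the partition metadata.
    static Dictionary<string, long> Snapshot(string connectionString)
    {
        const string sql = @"
            SELECT t.name, SUM(p.rows) AS row_count
            FROM sys.tables t
            JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
            GROUP BY t.name";

        var counts = new Dictionary<string, long>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    counts[reader.GetString(0)] = reader.GetInt64(1);
            }
        }
        return counts;
    }

    static void Main()
    {
        var cs = "Server=.;Database=LegacyDb;Integrated Security=true"; // placeholder
        var before = Snapshot(cs);
        Console.WriteLine("Submit the form in the browser, then press Enter...");
        Console.ReadLine();
        var after = Snapshot(cs);

        // Print only the tables whose row counts changed during the manual submit.
        foreach (var kvp in after)
        {
            long old;
            before.TryGetValue(kvp.Key, out old);
            if (kvp.Value != old)
                Console.WriteLine("{0}: {1} -> {2}", kvp.Key, old, kvp.Value);
        }
    }
}

(This only shows inserts/deletes by count; updates in place would need SQL Server change tracking or a trigger-based audit instead.)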
Any thoughts will be appreciated. Thanks!

These pages all work fine right now, but no one on our team completely understands the business logic behind them, and reading through the code line by line would be very painful.
Ok, so this is really the crux of your problem. Do you have access to the stakeholders of this application (i.e. whoever it was designed for)? They are probably the best people to explain to you how it's supposed to work. You need to get access to these folks and have them give you at least a crash course in the "domain" of the application.
Only when you and your colleagues fully understand how this system works can you test it. If you don't have access to the stakeholders, then don't panic - just take this thing one module at a time and start mapping it out.
You don't have to go all out and learn the whole thing up front - take it one module or subsystem at a time, make plenty of diagrams about how the various parts of the business domain work from the perspective of the users, and then do the same for how they currently work in the code. Put both diagrams side by side and start planning how you might refactor the code to be organised more like the business flow and less like the existing flow.
This can be a tricky process for sure, but once you get going - especially once you know how the system should ideally flow, based on the previous point - it's not that bad. Bear in mind that you will surely be able to copy/paste a lot of your existing code; in fact, you should probably avoid the temptation to fix bugs on the fly at this stage. Focus instead on the organisation of your classes, etc., so as to make them adhere to SOLID - any classes that broadly stick to this will typically be very testable.
Any bugs or really poorly written code can be flagged at this stage for fixing later on; a key point here is reorganise, not rewrite!
Armed with that knowledge, the next step is to write a test specification for the various parts of the application, based on the new design of the modules. That means lots of tests and test methods (using whatever framework you like: MSTest, xUnit, etc.). You really can't avoid this, but remember - one module at a time!
As DanielMann pointed out, it might be worth looking at something like SpecFlow, which lets you write test specifications in a natural(ish) language form - you may even be able to get the stakeholders on board to help write the tests!
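For illustration only - the step wording and the LoginService/LoginResult types below are invented, not part of this answer - a SpecFlow step binding against the refactored logic might look roughly like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

// Placeholder types standing in for the refactored business logic (invented for this sketch).
public class LoginResult { public bool Succeeded; public string ErrorMessage; }
public class LoginService
{
    public LoginResult LogIn(string userName, string password)
    {
        // The real implementation lives in the refactored module.
        return new LoginResult { Succeeded = false, ErrorMessage = "Invalid credentials" };
    }
}

[Binding]
public class LoginSteps
{
    private readonly LoginService _loginService = new LoginService();
    private LoginResult _result;

    [When(@"I log in with an incorrect password")]
    public void WhenILogInWithAnIncorrectPassword()
    {
        _result = _loginService.LogIn("AUser", "WrongPassword");
    }

    [Then(@"I should be shown an error message")]
    public void ThenIShouldBeShownAnErrorMessage()
    {
        Assert.IsFalse(_result.Succeeded);
        Assert.IsFalse(string.IsNullOrEmpty(_result.ErrorMessage));
    }
}

The point of a binding like this is that the Gherkin stays declarative while the binding decides whether to drive the UI or call the refactored classes directly.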
You don't have to have literally every detail specified at first; once you have identified the major "business units" in terms of logic, you can break them down into smaller and smaller chunks of conceptual behaviour.
So you may end up with tests like this (just an example):
[TestMethod]
public void LoginModule_WhenPasswordIsWrong_RedirectsWithErrorMessage()
{
    // Write some code in here that exercises the LoginModule and assert that it behaves as expected.
    // The really important thing is to write these tests based on the NEW design
    // and NOT the existing system.
    Assert.Fail("Write the test!");
}
Now, the key thing here: most, if not all, of these tests will not even compile, and even the few that do will probably fail. That's actually a good thing! Because now you have a clear path of what you have to do - which is to make those tests pass by implementing the new design. Best to do this in a branch of the original!
So in the example above, you might not even have a clearly defined login "module" - the code might be scattered across several pages and classes. But by writing your "ideal" tests up-front, based on an ideal design, you now have a target to aim for. And also you don't have to be totally purist about it - there is no sin in bending the rules and making some tests less granular than the ideal case - you can come back and do that later.
Rinse and repeat - every system you do is one less to do tomorrow!
Once your initial set of test methods is passing, you can then "zoom in" and start refining them, in the process fixing the bugs and crappy code (the same thing, in many respects :) that you came across earlier.
Best of luck with it!

Related

Cross-reference code and business requirements

I was curious whether a tool exists that collects metadata references in code to somehow link business requirements with sections of code.
I'm currently working on a legacy system that does not do anything like this, so I'm envisioning a Visual Studio extension that allows me to define metadata tags. Once they are defined, I can add them to sections of code so that they are searchable.
So, for example, if I am working on the billing subsystem, perhaps I add the [Billing] tag so the next developer knows the specific part of the code that I used as an entry point into the code.
Is this a thing? Or could it even be leveraged to be useful? I just find that I am often lost for months learning a new system, and I have always wished there were a way to search the code for business requirements - or at least a dictionary of search terms.
I think a problem is that business requirements may not map directly to any kind of module, but may be spread over the code-base. So where would you put your tags?
Requirements may also change, so you might have tags linking to requirements that may no longer accurately describe the current behavior.
I would propose instead documenting such requirements through tests - preferably automated tests whenever possible, though in some cases manual tests might be appropriate. This lets you know whenever a requirement is no longer fulfilled, and then you can either change the test or the product. Such tests can also be useful when you are new to the code base, to gain some understanding of how the code is intended to work.
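As a small illustration (the business rule and the Order/ShippingCalculator types here are invented, not from this answer), such a requirement might be captured as a test rather than a tag:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Placeholder types invented for this sketch; yours will differ.
public class Order { public decimal Total { get; set; } }
public class ShippingCalculator
{
    public decimal CostFor(Order order) { return order.Total > 100m ? 0m : 4.95m; }
}

[TestClass]
public class ShippingRules
{
    // The test name documents the business rule, and the assertion keeps it honest.
    [TestMethod]
    public void Orders_Over_100_Euro_Get_Free_Shipping()
    {
        var shipping = new ShippingCalculator();
        Assert.AreEqual(0m, shipping.CostFor(new Order { Total = 120m }));
    }
}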
Having a good architecture, appropriate code comments, and some kind of project architecture documentation are other common tools to make familiarization easier, but it is fairly rare that all of these things exist and are up to date.
Linking source code to external systems can be somewhat risky, since code tends to outlive systems and people. I have worked with code bases that have gone through at least four different source control systems and three different issue trackers. And even if you think having such tags is the best thing since sliced bread, your successor might consider it unnecessary bloat.

Re-architecting a Compact Framework .NET 3.5 application

I've recently joined a company that is using a .NET Compact Framework 3.5 application that is supposed to be a typical 3-tier application (Client/UI, Business, Data). Unfortunately the application is in a sad state of affairs - business logic in controls, no unit tests or mocking, etc. We have a chance to try and change it, but I was wondering if anyone has similar experience with how to tackle this? As this is a production system, we can't just steamroller in and change it overnight, so it will need more of a phased approach.
Any recommendations or links to any best practices, please?
If it works and is tested by being used I wouldn't touch anything unless there is a change needed. If the need arises I would do the following:
1. Locate the part that needs to change.
2. Refactor that part to make it testable.
3. Write enough tests of the current logic to feel safe, and get them to green.
4. Refactor into the new design.
5. If the current design is truly messy, divide it into smaller parts as needed.
6. Write tests for those new parts.
7. Then change tests and parts bit by bit to reflect the new functionality.
8. Get the tests to green.
In reality I would probably execute steps 4 through 6 together, so as not to write too many tests and too much logic for stuff I'm changing anyway, but it's all about how much you can keep track of in your head, and how safe you feel with your knowledge of how the part you are changing affects the other parts of the system.
If the changes affect huge parts of the app, it's going to be tricky to do without completely rewriting everything. On the other hand, how can I successfully rewrite that which I don't understand? Basically it's a balance between the time it takes to find out what the app is supposed to do and then rewrite it, versus the time it takes to understand the current app and add the changes.

Outside-in BDD (with SpecFlow)

I'm new to BDD, but I found it very interesting and want to develop my next project using BDD. After googling and watching screencasts I still have lots of questions about BDD in real life.
1. Declarative or Imperative scenarios?
Most of the given-when-then scenarios I have seen were written in terms of the UI (imperative):
Scenario: Login
Given I am on the Login-page
When I enter 'AUser' in the textbox 'UserName'
And I enter 'APassword' in the textbox 'Password'
And I click the 'Login' button
Then I should see the following text 'You are logged in'
I find those tests extremely brittle, and they say nothing about the business value of clicking buttons. I think it's a nightmare to maintain. Why do most examples use imperative scenarios?
Scenario: Login (declarative)
Given I am not logged in
When I log in using valid credentials
Then I should be logged in
If you prefer declarative style, how do you describe such stuff like 'Home page' or 'Products page'?
Tips for writing good specifications
2. Exercise UI or not?
Most of the step implementations I have seen used WatiN, White, or something like that to implement scenarios from the user's point of view: starting a browser, clicking buttons. I think it's extremely slow and brittle. Well, I can use something like a Page Object to make the tests less brittle, but that's another amount of work - especially for desktop applications with a complex UI.
How do you implement scenarios in real-life projects - exercising UI, or via testing controllers/presenters?
Best way to apply BDD
3. Real database or not?
When the Given part of a scenario is implemented, it often needs some data to be in the system (e.g. some products for a shop application). How do you implement that part - by adding data to a real database (full end-to-end testing), or by providing repository stubs to the controllers?
Waiting for experienced answers!
UPDATE: Added useful links on questions.
Declarative is the proper way, IMO. If you're talking about .aspx page file names, you're doing it wrong. The purpose of the story is to facilitate communication between developers and non-developers. Non-developers don't care about products.aspx; they care about a product listing. Your system does something the non-developers find valuable. This is what you're trying to capture.
Well, the stories tell you the high-level features you need to implement - it's what your system must do. The only way to really tell whether you've done this is to actually exercise the UI. BDD SpecFlow stories, to me, don't replace unit tests; rather, they're your integration tests. If you break one of these, you've broken the value the business gets from your software. Unit tests are implementation details your users don't care about, and they only test each piece in isolation. That can't tell you whether A and B actually work together all the time (in theory it should; in practice interesting [read: unexpected] things happen when you actually have the two parts playing with each other). Automated end-to-end tests can help with your QA as well: if a functional area breaks, you know about it, and QA can spend their time in other areas of the application while you determine what broke the integration tests.
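As a rough illustration of such an end-to-end test - the URL and element IDs are made up, and this assumes the Selenium WebDriver and MSTest packages rather than WatiN - it might look something like:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class LoginEndToEndTests
{
    [TestMethod]
    public void Login_WithValidCredentials_ShowsLoggedInMessage()
    {
        // Drives the real application through the browser, end to end.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost/Login.aspx"); // placeholder URL
            driver.FindElement(By.Id("UserName")).SendKeys("AUser");
            driver.FindElement(By.Id("Password")).SendKeys("APassword");
            driver.FindElement(By.Id("LoginButton")).Click();

            StringAssert.Contains(driver.PageSource, "You are logged in");
        }
    }
}

Keep only a handful of these per feature; they are slow, so they should cover the happy paths that carry business value rather than every edge case.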
This is a tricky one. We've taken a hybrid approach. We do use the database (integration tests, after all, test the system functioning as one thing rather than as individual components), but rather than resetting configurations all the time we use Deleporter to replace our repositories with Moq stubs when we need to. It seems to work OK, but there are certainly pros and cons either way. I think we're still largely figuring this out ourselves.
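As a small sketch of the repository-stubbing side (IProductRepository and Product are invented placeholder types, not from this answer), a Moq-based stub might look like:

using System.Collections.Generic;
using Moq;

// Placeholder abstractions invented for this sketch; yours will differ.
public class Product { public string Name { get; set; } }
public interface IProductRepository { IList<Product> GetTopSelling(int count); }

public static class GivenSteps
{
    // Returns a stubbed repository that the controller or presenter under test can consume.
    public static IProductRepository StubbedProductRepository()
    {
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetTopSelling(10))
                  .Returns(new List<Product>
                  {
                      new Product { Name = "Anvil" },
                      new Product { Name = "Rocket Skates" }
                  });
        return repository.Object;
    }
}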
Edit: I found this article just now describing the concept of limiting yourself to talking only about specific domains to avoid brittle scenarios.
His main point is that the minimum number of domains you can talk about is two: the problem domain and the solution domain. If you're talking about anything outside those two domains, then you involve too many stakeholders, you introduce too much noise, and you make your scenarios brittle.
He also mentions that an absolutely "declarative" or "imperative" model is a myth. He talks about a cognitive model called "chunking", saying that at any level of abstraction you can "chunk up" or "chunk down". This means you can get more explicit (how?) or more meta (what or why?). You chunk up from an imperative model by asking "what are we doing?" You chunk down by asking "how will we do this?" So I guess I wouldn't get too hung up on declarative vs. imperative - it won't get you anywhere as far as this problem goes.
What will get you somewhere is figuring out which domains each term belongs in, possibly by identifying which stakeholder is the expert for the domain that term belongs in. Once you've identified all the domains, you can either pick related terms that are in one of the scenario's most prominent domains, or remove non-fitting statements entirely. If that isn't sufficient, you can split up, further specify, or move the scenario so it can satisfy these requirements.
BTW, he also uses the scenario of logging in on a UI, so you've got concrete guidance :)
Before Edit: (some of this still applies. The "DB or no DB" and "UI or no UI" questions are unrelated)
1 - Declarative or Imperative scenarios?
Declarative when you can, though imperative has some value (at some points in a project lifecycle).
Imperative is an easier way to think for testers and business analysts who aren't as familiar with information theory and design. It is also easier to think about earlier on in a project, before you've nailed down your problem domain and workflows. It can be useful for exploratory thinking.
Declarative is less subject to change over time. Since a GUI is the part of an application most subject to churn at a whim, this is extremely valuable. It is easier to think about once you've nailed down your problem domain and workflows, and are more focused on relational concepts. It is a more stable and more generally applicable model.
If you write test cases with a generic and declarative model, you could implement them using any combination of full app GUI automation, integration tests, or unit tests.
how do you describe such stuff like 'Home page' or 'Products page'?
I'm not sure I would at the base level of features and requirements. You might make sub-features and sub-requirements that describe implementation details, like specific UI workflows. If you're describing a piece of a UI, then you should be defining a UI feature/requirement.
2 - Exercise UI or not?
Both.
I think it's extremely slow and brittle
Yes, it is. Perform every high level scenario/requirement with the UI and full DB integration, but don't exercise every single code path with end to end UI automation, and certainly not edge cases. If you do, you'll spend more time getting them working, and a lot less time actually testing your application.
You can architect your application so you can do lower-cost integration tests, including single-piece UI-based tests. Unit tests are also valuable.
But the fewer integration tests you do, the more forehead-slapping bugs you're going to miss. It may be easier to write unit tests, and they will certainly be less brittle, but you'll be testing less of your application, by definition.
3 - Real database or not?
Both.
High level end-to-end integration tests must be done with the full system in place. This includes a real DB, running your tests with each system on a different server, etc.
The lower level you get, the more I advocate mock objects.
Unit tests should only test individual classes
Mid-level integration tests should avoid expensive, brittle, and impactful dependencies such as the file system, databases, the network, etc. Try to test the implementation of those brittle and impactful dependencies with unit tests and end-to-end tests only.
Instead of mentioning a page by name, describe what it represents, e.g.
Scenario: Customer logs in successfully
When I log in
Then I should see an overview of ACME's top selling products
You can test directly against underlying APIs or models, but the more you do this, the more you risk not catching an integration issue. One approach is to balance things with a small number of full-stack tests, and a larger number which test between two layers only.

We have migrated VB6 code to C# in .NET

The code was migrated using a third-party tool; whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is: for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use some tool in VSTS 10 to create a UML model of this code, to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how quality migrated code can be delivered, given that the functionality of the original VB6 application is unknown to us?
for such migration activities, do we not bother running unit tests for the functions.
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better - processes like TDD and tools to automate unit testing are helping - but for many older systems, lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirements for a migration are often "just make it do what it does now, but on the new platform." Not very useful when people do not know/remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with other tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing of successive generations of the translated code, using a product like BeyondCompare, to verify that each translation tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
1. What do you do when you "run the app"?
2. How do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases and automated unit tests; the second question is answered in terms of looking at the application UI, and the results (data, web pages, reports) from both systems and comparing (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
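As a very rough sketch of the data side of side-by-side testing - the connection strings and query below are placeholders, not part of this answer - one could load the same query from the legacy and migrated systems' databases and diff the results:

using System;
using System.Data;
using System.Data.SqlClient;

static class SideBySide
{
    static DataTable Load(string connectionString, string query)
    {
        using (var adapter = new SqlDataAdapter(query, connectionString))
        {
            var table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }

    static void Main()
    {
        const string query = "SELECT OrderId, Total FROM Orders ORDER BY OrderId"; // placeholder
        var legacy   = Load("Server=legacy;Database=App;Integrated Security=true", query);
        var migrated = Load("Server=new;Database=App;Integrated Security=true", query);

        if (legacy.Rows.Count != migrated.Rows.Count)
            Console.WriteLine("Row counts differ: {0} vs {1}", legacy.Rows.Count, migrated.Rows.Count);

        // Compare the overlapping rows cell by cell and report any mismatches.
        for (int i = 0; i < Math.Min(legacy.Rows.Count, migrated.Rows.Count); i++)
            for (int c = 0; c < legacy.Columns.Count; c++)
                if (!Equals(legacy.Rows[i][c], migrated.Rows[i][c]))
                    Console.WriteLine("Mismatch at row {0}, column {1}", i, legacy.Columns[c].ColumnName);
    }
}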
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!

How do you handle large projects? [closed]

I've just inherited a large project previously coded by about 4-5 people. The documentation consists of comments, and is not very well written. I have to get up to date on this project. How do I start? It consists of many different source files. Do you just dig in? Are there tools that can help visualize the structure/flow?
If you have a chance, I'd try and talk to the original designers and developers. Ask them about any major design issues or shortcomings of the project. Is the project in good shape and only needs maintenance or are there major components that need to be added or reworked? What are going to be the biggest roadblocks to maintaining the project? Take one or two of them to lunch (separately) if you have a budget for it as they might be more free to talk about problems outside of the office.
Talking to the users is also important for getting a feel for the current status of the project. Quite often they have a different opinion of how things stand than the developers do. Make sure, however, that they don't start giving you a list of all the things they want added or changed - you should take a few weeks to understand the project before you start making major changes to it.
As for visualization tools, I'd start with the database design if there is a database involved. Tools like Microsoft Visio can create a diagram from an existing database. I find knowing the design of the database helps me wrap my head around what the programmers were trying to accomplish. Visio is also good for documenting program flow with some basic flowcharts though you'll have to create them yourself - it doesn't generate them automatically as far as I know.
Good luck.
I would encourage you to buy and read this book thoroughly. It provides a LOT of information in this regard, much more than you will find here.
Brainstorming a little for you:
Step through the application with a debugger, and use a static code analysis tool for whichever language you are working with...
Talk with people - both developers AND USERS to get a feel of the application.
Review the issue tracking system to see if you can see any recurring types of problem...
Are there tools that can help visualize the structure/flow?
The latest Visual Studio 2010 allows you to generate architecture diagrams.
http://ajdotnet.wordpress.com/2009/03/29/visual-studio-2010-architecture-edition/
Try to find the starting point of the system and start digging from there. It sort of sucks to be in that situation, and chances are the comments might not be that helpful either. If the original developers didn't bother (or didn't have the chance) to document, chances are they never kept the comments up to date with code changes.
So, time to bring the shovel... but don't just dig in blindly. One important thing is to understand what the system does from the users' perspective.
Concurrently with your code digging, you need to meet with a user (or the users' liaison) and have them walk you through the system, showing you how it is supposed to be used, for what purpose, and what it and its subsystems are supposed to do. Moreover, attempt to understand the business pre-conditions and post-conditions of each major operation performed with this system.
Then map out (or make a hierarchical chart of) the main functions of the system; classify them by category, purpose, or module. If the system performs some sort of workflows or business transactions, attempt to chart some sort of state/transition diagram documenting each one (and cross-referencing each state/transition to the subsystem or module that is in charge of it).
Once you have that, you can dig according to function. It will be best if you dig for a specific purpose, say, there is a bug fix to implement. You locate the logical module or category pertaining to that bug fix, you have the pre-conditions and post-conditions; then you can dig precisely on (or around) that bug fix.
If you just dig in without a guide (at least a high level one), you can be digging for months without getting anywhere (I'm telling you from painful experience.)
If there is no user manual, write a draft based on your meetings with the users or the users' liaison. That could serve as a guide for writing a developer's/administrator's manual for the system you just inherited (if there is ever a chance to write one).
If the code is not in source control, put it there. It doesn't matter which source control system you pick (it could even be CVS, yuck!). What matters is to put it under source control ASAP.
Those developers didn't exist in a vacuum; they must have exchanged emails. Identify the other tech liaisons they worked with. Attempt to identify what other systems, if any, this system interfaces with (i.e. your databases, other people's databases, cron jobs, etc.)
But this could come at a later time. I think you should, for starters, focus on understanding how to use the system and what it is for. Let's call it understanding its business/knowledge architecture. Then dig according to that... or better yet, according to that and with the purpose of fixing a bug.
Good luck.
Use a profiler to see the main functions and events in your project (the fastest way to learn the framework)
Learn the business logic very well to better understand the code
Document every new thing you learn - set up a wiki (you will be surprised how quickly things are forgotten)
You can use Visio to draw database model diagrams (keep them close to you while studying the code)
These are the things that helped me when I inherited my previous project (50+ developers, a 70+ GB database, 1 GB of source code, hardly a single line of comments in the code (well, maybe a few :), and everything written in a foreign language).
Use the debugger to walk through the application. That will let you go both deep and wide. You'll also be able to learn about how the code handles specific scenarios.
When you're ready to change something, as @Jaxidian said, Working Effectively with Legacy Code is a great resource.
I was recently in a similar situation. What helped in my case was focusing on the changes I needed to perform on the project, and in the process of making those changes I learned about how the project is structured and so on. Sure, the first few tasks took a bit longer, but look on the bright side: I got stuff done and I got familiar with the project at the same time.
I'd suggest two things that may help:
Be productivity-driven. In other words, find a change that needs doing and use this to learn how that bit of the system works. Your changes may not be the most elegant without a whole-picture understanding of the software, but you will get work done within days/weeks.
Follow things from the user-interface. I.e if a change involves things a user does on a dialog, find that dialog in the code (relatively easy) and then work backwards to see what bits of the code provide data to the dialog, how the dialog interacts with the system, etc. Trying to find "where does X happen in the code" is very hard without good documentation, but finding "where is the code relating to this dialog" is quite easy and gives you an entry-point into the code.
Whenever I start a new project, I spend 2-3 days skim reading the code and making notes. I basically go through the entire solution from top to bottom and make a map in a text editor of each (significant) class in each project and what it appears to do.
The aim in doing this is not to completely understand the entire codebase, so don't worry if you feel you are not getting your head around it completely. The aim is that you end up with an index of where to go when you need to start on your first piece of work. You should also end up with a cursory picture of the solution in the back of your brain that will get filled in over the next couple of months. I always do this on the first few days as your superiors will not expect you to be productive during this time and you may never get another opportunity where you have the time to do so.
Also, do not rely on code comments for direction. Even with the best intentions they are often unmaintained and may lead to incorrect conclusions about what a class or section of code may do: a comment may lie but the code always tells the truth.
If you already have a team, you could charge each member with exploring a part of the framework, and the results of their exploration should be recorded somewhere, like a wiki. After that, give each of them a task similar to something that is already done in the system (from a functional point of view).
For example: if a list of products is displayed in your app, you could display a list of orders (the complexity should be approximately the same), in the same manner as it's actually done in the app. Then make it more interesting: try to edit the list and save it to the DB.
Then switch the tasks and let the questions appear, and then the first person who did the same task can show and explain how things are done.
That way you'll see how things are done pretty easily, plus your team will be up to date with this knowledge.
Presuming there is a database, start with the data model. Somewhere (Mythical Man-Month?) it was written "if I have your tables, I don't need to see your code."
Regarding potential tools, you may want to look into NDepend. It is a code-analysis tool, with an emphasis on highlighting the internal organization and dependencies of the code base (see this post for typical outputs), and spotting code quality issues. I have not used it personally, but Patrick Smacchia, one of the developers of the product, has a few posts where he applies NDepend to some classic apps (here is NUnit for instance) and discusses what it means, and I found them interesting.
Go and speak to the users or, read the manual and / or if one exists, go on a training course for the system (internal training departments will sometimes have put them together if there are lots of users).
If you don't know what it's meant to be doing then the chances of you being able to work out how it does it are close to zero.
