I'm working on a simple project, more as an exercise in TDD than anything else. The program fetches some images from a web server and saves them as files. For the record, what I am doing (my desired end result) is very similar to this Perl script, but in C#.
I've got to the point where I need to save the files to disk, and I need to write unit tests to drive the code. I'm not sure how to approach this. I want to be able to verify that the code created the expected files with the expected file name(s), and of course I don't want to touch the file system at all. I'm not completely new to unit testing and TDD, but for some reason I'm really not clear what to do in this situation. I'm sure the answer will be obvious once I've seen it, but the mysterious place in my brain where code comes from is just not cooperating.
My tools of choice are MSpec and FakeItEasy, but suggestions in any frameworks would be gratefully received. What are sensible approaches to unit testing file system interactions?
What would help here is Dependency Injection. Break up the monolithic download operation into smaller pieces and inject them into the downloader. Declare interfaces for these pieces:
public interface IImageFetcher
{
IEnumerable<Image> FetchImages(string address);
}
public interface IImagePersistor
{
void StoreImage(Image image, string path);
}
With these declarations you can write a downloader class that integrates the whole thing like this:
public class ImageDownloader
{
private IImageFetcher _imageFetcher;
private IImagePersistor _imagePersistor;
// Constructor injection of components
public ImageDownloader(IImageFetcher imageFetcher, IImagePersistor imagePersistor)
{
_imageFetcher = imageFetcher;
_imagePersistor = imagePersistor;
}
public void Download(string source, string destination)
{
var images = _imageFetcher.FetchImages(source);
int i = 1;
foreach (Image img in images) {
string path = Path.Combine(destination, "Image" + i.ToString("000"));
_imagePersistor.StoreImage(img, path);
i++;
}
}
}
Note that ImageDownloader does not know which implementations will be used or how they work.
Now, when testing, you can supply a dummy persistor that simply records the file names in a List<string>, instead of the real implementation that writes to the file system.
UPDATE
// For testing purposes only.
class DummyImagePersistor : IImagePersistor
{
public readonly List<string> Filenames = new List<string>();
public void StoreImage(Image image, string path)
{
Filenames.Add(path);
}
}
Testing:
var persistor = new DummyImagePersistor();
var sut = new ImageDownloader(new ImageFetcher(), persistor);
sut.Download("http://myimages.com/images", @"C:\Destination");
Assert.AreEqual(10, persistor.Filenames.Count);
...
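Since you mentioned FakeItEasy, the same test could also use fakes instead of a hand-rolled dummy. A minimal sketch, assuming the interfaces above and that the images themselves don't matter to the test (the exact verification syntax depends on your FakeItEasy version):
var fetcher = A.Fake<IImageFetcher>();
var persistor = A.Fake<IImagePersistor>();
A.CallTo(() => fetcher.FetchImages("http://myimages.com/images"))
    .Returns(new Image[] { new Bitmap(1, 1), new Bitmap(1, 1) });

var sut = new ImageDownloader(fetcher, persistor);
sut.Download("http://myimages.com/images", @"C:\Destination");

// Verify the persistor was asked to store each image under the expected path.
A.CallTo(() => persistor.StoreImage(A<Image>.Ignored, @"C:\Destination\Image001")).MustHaveHappened();
A.CallTo(() => persistor.StoreImage(A<Image>.Ignored, A<string>.Ignored))
    .MustHaveHappened(Repeated.Exactly.Times(2));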
I'm trying to add custom coloring for only certain keywords in my Visual Studio editor for C# code. I want to be able to color any type that implements IDisposable as a different color. Ideally I'd like to create a simple list of classes/interfaces that derive from IDisposable in some sort of configuration that I can edit. (Although if you said there was a method/plugin that would automatically find all disposable types and color them independently that would be the Holy Grail).
I've done a ton of research and it looks like an "editor classifier" extension might do the trick. However, I created one that merely tries to color the word "Stream", and although it does hit my code that attempts to highlight that word, the word does not end up highlighted in the editor.
I have added my VS extension to GitHub here.
This really seems like it should be fairly straightforward, but I have gone down many alleys on this one only to find dead ends. Is there a simpler way to do this, or is my extension broken?
Update
Very strange. I just ran my extension again, and although it does not highlight the text in the editor, it highlights all instances of "Stream" in the popup text when you hover over a type/variable! Is there any way to get it to apply to the editor?
Depending on whether you are using JetBrains ReSharper or not, you may write a plugin for that. That way you can not only add a visual notification of IDisposable on a variable but also provide quick-fixes if, and only if, it is not being called, which is what I am assuming you want to catch. Mind you, I can imagine that there's already an R# plugin for that. I know I've considered this too, but I was too lazy to write a plugin for it.
Don't get me wrong, by the way: if you're not using R# yet, you should consider trying it out.
Among others you'd be working with this: API-QuickFix
There are also ways to define custom keywords, as ReSharper does, via custom markup, and apply quick-fixes to those.
PS: No, I don't work at JetBrains. It's just that good :)
UPDATE:
Potential VS extension fix?
Check this one out: MSDN Link Highlighting Text
I tried opening your GitHub project but couldn't, so I thought I'd just check MSDN instead. It seems you are deriving from the wrong class to fulfill your needs?
MSDN keyword "Editors - Extending the Editor - Walkthrough: Highlighting Text"
I know SO wants code on the site, but MSDN links going down is rather unlikely, and with the given information the content can be found easily enough :)
I'm a bit late to the party, but hey, why not throw my 2 cents in.
As you've explained in your question, your project has two basic parts:
Finding the classes that implement IDisposable
Highlighting them
The first is by far the hardest, though not impossible. A word-list based approach is probably the simplest, though it should be possible with Roslyn to figure out on the fly which classes inherit IDisposable.
You could also always resort to loading the project's compiled .exe/.dll in the background after a build and figuring out what the types are there, but you'd still have to write some magic glue code to figure out what short class names in the code referred to what actual full-name classes in the assembly.
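To give a sense of the Roslyn route, here is a rough sketch (not wired into the extension; the type and method names are purely illustrative) of asking a Compilation whether a given type implements IDisposable:
using Microsoft.CodeAnalysis;

static class DisposableCheck
{
    // True if the type named by metadataName (e.g. "System.IO.Stream")
    // implements IDisposable anywhere in its inheritance chain.
    public static bool ImplementsIDisposable(Compilation compilation, string metadataName)
    {
        var disposable = compilation.GetTypeByMetadataName("System.IDisposable");
        var candidate = compilation.GetTypeByMetadataName(metadataName);
        if (disposable == null || candidate == null)
            return false;

        // AllInterfaces includes interfaces inherited from base types.
        return candidate.AllInterfaces.Contains(disposable);
    }
}
The harder part is mapping an identifier in the editor buffer back to a symbol, which is where a SemanticModel for the document would come in.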
The second part, highlighting, is quite easy once you know how to do it (it helps that I've spent the last several months working full-time on extending VS). Of course, with Visual Studio, nothing is as simple as it looks (despite the efforts of Microsoft to try to make it user-friendly). So, I've built a sample extension that highlights just classes named "Stream" within C# files to get you started.
The relevant code is below, and the full project source is on GitHub. It starts with a classification-tagger provider:
[Export(typeof(ITaggerProvider))]
[ContentType("CSharp")]
[TagType(typeof(ClassificationTag))]
[Name("HighlightDisposableTagger")]
public class HighlightDisposableTaggerProvider : ITaggerProvider
{
[Import]
private IClassificationTypeRegistryService _classificationRegistry = null;
[Import]
private IClassifierAggregatorService _classifierAggregator = null;
private bool _reentrant;
public ITagger<T> CreateTagger<T>(ITextBuffer buffer) where T : ITag
{
if (_reentrant)
return null;
try {
_reentrant = true;
var classifier = _classifierAggregator.GetClassifier(buffer);
return new HighlightDisposableTagger(buffer, _classificationRegistry, classifier) as ITagger<T>;
}
finally {
_reentrant = false;
}
}
}
Then the tagger itself:
public class HighlightDisposableTagger : ITagger<ClassificationTag>
{
private const string DisposableFormatName = "HighlightDisposableFormat";
[Export]
[Name(DisposableFormatName)]
public static ClassificationTypeDefinition DisposableFormatType = null;
[Export(typeof(EditorFormatDefinition))]
[Name(DisposableFormatName)]
[ClassificationType(ClassificationTypeNames = DisposableFormatName)]
[UserVisible(true)]
public class DisposableFormatDefinition : ClassificationFormatDefinition
{
public DisposableFormatDefinition()
{
DisplayName = "Disposable Format";
ForegroundColor = Color.FromRgb(0xFF, 0x00, 0x00);
}
}
public event EventHandler<SnapshotSpanEventArgs> TagsChanged = delegate { };
private ITextBuffer _subjectBuffer;
private ClassificationTag _tag;
private IClassifier _classifier;
private bool _reentrant;
public HighlightDisposableTagger(ITextBuffer subjectBuffer, IClassificationTypeRegistryService typeService, IClassifier classifier)
{
_subjectBuffer = subjectBuffer;
var classificationType = typeService.GetClassificationType(DisposableFormatName);
_tag = new ClassificationTag(classificationType);
_classifier = classifier;
}
public IEnumerable<ITagSpan<ClassificationTag>> GetTags(NormalizedSnapshotSpanCollection spans)
{
if (_reentrant) {
return Enumerable.Empty<ITagSpan<ClassificationTag>>();
}
var tags = new List<ITagSpan<ClassificationTag>>();
try {
_reentrant = true;
foreach (var span in spans) {
if (span.IsEmpty)
continue;
foreach (var token in _classifier.GetClassificationSpans(span)) {
if (token.ClassificationType.IsOfType(/*PredefinedClassificationTypeNames.Identifier*/ "User Types")) {
// TODO: Somehow figure out if this refers to a class which implements IDisposable
if (token.Span.GetText() == "Stream") {
tags.Add(new TagSpan<ClassificationTag>(token.Span, _tag));
}
}
}
}
return tags;
}
finally {
_reentrant = false;
}
}
}
I've only tested this on VS2010, but it should work for VS2013 too (the only thing that might be different is the class classification name, but that's easy to discover with a well-placed breakpoint). I've never written an extension for VS2012, so I can't comment on that, but I know it's quite close to VS2013 in most respects.
So, one possible solution (I believe this one works):
1) Create your own content type which inherits from csharp.
2) Create new TextViewCreationListener which will swap out all "csharp" content types with your own one, thus potentially "disarming" all the other classifiers.
3) Register your classifier to handle your own content type.
Here is some of the code:
[Export(typeof(IVsTextViewCreationListener))]
[ContentType("csharp")]
[TextViewRole(PredefinedTextViewRoles.Editable)]
class TextViewCreationListener : IVsTextViewCreationListener {
internal readonly IVsEditorAdaptersFactoryService _adaptersFactory;
[Import] internal IContentTypeRegistryService ContentTypeRegistryService = null;
[ImportingConstructor]
public TextViewCreationListener(IVsEditorAdaptersFactoryService adaptersFactory) {
_adaptersFactory = adaptersFactory;
}
#region IVsTextViewCreationListener Members
public void VsTextViewCreated(Microsoft.VisualStudio.TextManager.Interop.IVsTextView textViewAdapter) {
var textView = _adaptersFactory.GetWpfTextView(textViewAdapter);
var myContent = ContentTypeRegistryService.GetContentType(MyContentType);
if(myContent == null)
{
ContentTypeRegistryService.AddContentType(MyContentType, new[] {"csharp"});
myContent = ContentTypeRegistryService.GetContentType(MyContentType);
}
// some kind of check if the content type is not already MyContentType.
textView.TextBuffer.ChangeContentType(myContent, null);
}
#endregion
}
And now, just modify your IClassifierProvider to register with your own content type, as such: [ContentType(MyContentType)]
In your own IClassifier, you can basically do your own calculation, and once you decide you can't handle something, you can pass control to the other classifiers.
If you use MEF and import IClassifierAggregatorService, you can get a "MASTER classifier" which will run all the logic for you. I haven't implemented it yet, but I've suggested something similar in the past, and it seemed to work. Alternatively, you could maybe use [ImportMany] with List<IClassifier> and filter out the csharp ones?!
I have created this interface for storing data in a file:
interface IFileStore {
bool Import(string data);
string Export();
}
...and a generic class that works with types implementing that interface:
class DBTextFile<T> where T : IFileStore, new ()
{
public List<T> db = new List<T>();
public void SaveFile(string filename)
{
foreach (T d in db) File.WriteToFile(d.Export(), filename); // File.WriteToFile and File.Read below stand in for my own I/O helpers
}
public void RestoreFile(string filename)
{
db = new List<T>();
string buffer;
while ((buffer = File.Read(filename)) != null)
{
T temp = new T();
if(temp.Import(buffer)) db.Add(temp);
}
}
}
This approach has been working for me for a while. However, I'm finding that I'm having to manage too many files. I think it might be more appropriate to use a database where each of my files would become a table in that database.
I would like to use a database that does not require the user to install any software on the user's system. (I want to keep things simple for the user.) Basically, I just want a database that is easy for me, the developer, to use and deploy.
Would MySQL be a good choice? Or would a different database be a better fit for my needs?
You can use one of the various single-file databases.
The one I'm using and am happy with is SQLite.
You could also use Access as Monika suggested, or browse Google and see what else you can find.
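To illustrate how small the switch can be, here is a minimal sketch using the Microsoft.Data.Sqlite package (System.Data.SQLite is very similar); the SqliteStore class and its schema are just made up for the example:
using Microsoft.Data.Sqlite;

class SqliteStore
{
    private readonly string _connectionString;

    public SqliteStore(string databaseFile)
    {
        // The whole database lives in a single file; nothing for the user to install.
        _connectionString = "Data Source=" + databaseFile;
    }

    public void Save(string key, string payload)
    {
        using (var connection = new SqliteConnection(_connectionString))
        {
            connection.Open();
            var command = connection.CreateCommand();
            command.CommandText =
                "CREATE TABLE IF NOT EXISTS Items (Key TEXT PRIMARY KEY, Payload TEXT);" +
                "INSERT OR REPLACE INTO Items (Key, Payload) VALUES ($key, $payload);";
            command.Parameters.AddWithValue("$key", key);
            command.Parameters.AddWithValue("$payload", payload);
            command.ExecuteNonQuery();
        }
    }
}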
I am practicing TDD using MSTest together with Rhino Mocks, and I am trying to be as lazy as humanly possible, i.e. make use of VS2012 auto-generation wherever I can. But it doesn't always feel right to create an entire test method with the Arrange-Act-Assert methodology just to set up my class and its constructors and properties.
Currently, I find it easiest to create some properties in my test class, even if I don't use them, solely for the purpose of code generation. My question is: is this a bad habit, and is there a better/easier way to do it? Any commentary, good or bad, is welcome. Thank you!
[TestClass]
public class MainViewModelTest
{
private MainViewModel MainViewModel
{
get
{
var facilityDataEntity = MockRepository.GenerateStub<FacilityDataEntity>();
var viewModel = new MainViewModel(facilityDataEntity)
{
FacilityValue = string.Empty,
FacilityLabel = string.Empty
};
return viewModel;
}
}
private MainViewModel MainViewModelWithFacilityAndShopOrderData
{
get
{
var facilityDataEntity = MockRepository.GenerateStub<FacilityDataEntity>();
var shopOrderDataEntity = MockRepository.GenerateStub<ShopOrderDataEntity>();
var viewModel = new MainViewModel(facilityDataEntity, shopOrderDataEntity)
{
FacilityValue = string.Empty,
FacilityLabel = string.Empty,
ShopOrder = 99999999,
RequiredQuantity = 0M,
ItemCode = string.Empty,
ItemDescription = string.Empty
};
return viewModel;
}
}
[TestMethod]
public void MainViewModel_TranslateDataEntityListMethodReturnsMainViewModelRecords()
{
// Arrange
var facilityDataEntityList = MockRepository.GenerateStub<IEnumerable<FacilityDataEntity>>();
var shopOrderDataEntityList = MockRepository.GenerateStub<IEnumerable<ShopOrderDataEntity>>();
// Act
IEnumerable<MainViewModel> facilityResults = MainViewModel.TranslateDataEntityList(facilityDataEntityList);
IEnumerable<MainViewModel> shopOrderResults = MainViewModel.TranslateDataEntityList(facilityDataEntityList, shopOrderDataEntityList);
// Assert
Assert.IsInstanceOfType(facilityResults, typeof(IEnumerable<MainViewModel>));
Assert.IsInstanceOfType(shopOrderResults, typeof(IEnumerable<MainViewModel>));
}
}
It's not wrong to wrap up common code within your test classes, but I would avoid potentially sharing state between your tests.
There are two approaches you can use here.
Class/Test Initialization
As Peter mentions in his comments, it's easy enough to include initialization methods to do this sort of stuff for you.
//Only runs once per test run
[ClassInitialize]
public static void InitClass(TestContext context) {
//Ideally this should be reserved for expensive operations
// or for setting properties that are static throughout
// the lifetime of your test.
}
//Runs for every test
[TestInitialize]
public void TestInit(){
//Here you can setup common stub/mock behavior
// that will be common for every test, but ensure
// it is clean for each test run
}
Setup/Factory Methods
Another option is to create specialized setup or factory methods that can be used to reduce repeated test code and make the intent of your test clearer.
[TestMethod]
public void ShouldFailIfUserNameIsTed(){
var user = SetupUserScenario("Ted");
var result = _myUserService.Validate(user);
Assert.IsFalse(result);
}
private User SetupUserScenario(String username){
var user = new User();
user.Name = username;
//Do a bunch of other necessary setup
return user;
}
Hopefully this all makes sense, but I would also caution you not to go too crazy with this. If you put too much stuff into setup methods, then your tests will be less clear. You should be able to read the test and figure out what is going on without having to inspect a bunch of other places in the code.
That's what the ClassInitialize functionality is for. I would choose the expected and recommended means of doing something before anything else. It's more easily recognizable and takes less time to grok the code.
I know it's not so good to write tests after you have actually written the code. I'm a unit-testing newbie and feel that unit testing can deliver many benefits, so I'm obsessed with the idea of covering as much as possible.
For instance, let's say we have this code:
public class ProjectsPresenter : IProjectsViewObserver
{
private readonly IProjectsView _view;
private readonly IProjectsRepository _repository;
public ProjectsPresenter(IProjectsRepository repository, IProjectsView view)
{
_view = view;
_repository = repository;
Start();
}
public void Start()
{
_view.projects = _repository.FetchAll();
_view.AttachPresenter(this);
}
}
So, looking at the code above, what tests should I typically write for that piece of code?
I'm leaning toward writing tests on the constructor to make sure that the repository's FetchAll was called and that AttachPresenter is called on the view side.
POST EDIT
Here is my view interface:
public interface IProjectsView
{
List<Project> projects { set; }
Project project { set; }
void AttachPresenter(IProjectsViewObserver projectsPresenter);
}
Here is the view:
public partial class ProjectsForm : DockContent, IProjectsView
{
private IProjectsViewObserver _presenter;
public ProjectsForm()
{
InitializeComponent();
}
public Project project
{
set
{
listBoxProjects.SelectedItem = value;
}
}
public List<Project> projects
{
set
{
listBoxProjects.Items.Clear();
if ((value != null) && (value.Count() > 0))
listBoxProjects.Items.AddRange(value.ToArray());
}
}
public void AttachPresenter(IProjectsViewObserver projectsPresenter)
{
if (projectsPresenter == null)
throw new ArgumentNullException("projectsPresenter");
_presenter = projectsPresenter;
}
private void listBoxProjects_SelectedValueChanged(object sender, EventArgs e)
{
if (_presenter != null)
_presenter.SelectedProjectChanged((Project)listBoxProjects.SelectedItem);
}
}
POST EDIT #2
This is how I test the interaction with the repository. Is everything all right?
[Test]
public void ProjectsPresenter_RegularProjectsProcessing_ViewProjectsAreSetCorrectly()
{
// Arrange
MockRepository mocks = new MockRepository();
var view = mocks.StrictMock<IProjectsView>();
var repository = mocks.StrictMock<IProjectsRepository>();
List<Project> projList = new List<Project> {
new Project { ID = 1, Name = "test1", CreateTimestamp = DateTime.Now },
new Project { ID = 2, Name = "test2", CreateTimestamp = DateTime.Now }
};
Expect.Call(repository.FetchAll()).Return(projList);
Expect.Call(view.projects = projList);
Expect.Call(delegate { view.AttachPresenter(null); }).IgnoreArguments();
mocks.ReplayAll();
// Act
ProjectsPresenter presenter = new ProjectsPresenter(repository, view);
// Assert
mocks.VerifyAll();
}
I know it's not so good to write tests after you actually wrote code
It's better than not writing tests at all.
Your method works with two external components, and that interaction should be verified (in addition to the argument validation mentioned). Checking whether FetchAll was called gives you no value on its own (and checking that it returns something belongs to the ProjectsRepository tests); you want to check that the view's projects are set, which indirectly verifies that FetchAll was called. The tests you need are:
verify that view projects are set to expected value
verify that presenter is attached
validate input arguments
Edit: an example of how you would test the first case (projects are set):
// "RegularProcessing" in test name feels a bit forced;
// in such cases, you can simply skip 'conditions' part of test name
[Test]
public void ProjectsPresenter_SetsViewProjectsCorrectly()
{
var view = MockRepository.GenerateMock<IProjectsView>();
var repository = MockRepository.GenerateMock<IProjectsRepository>();
// Don't even need content;
// reference comparison will be enough
List<Project> projects = new List<Project>();
// We use repository in stub mode;
// it will simply provide data and that's all
repository.Stub(r => r.FetchAll()).Return(projects);
view.Expect(v => v.projects = projects);
ProjectsPresenter presenter = new ProjectsPresenter(repository, view);
view.VerifyAllExpectations();
}
In the second case, you'll set expectations on the view that its AttachPresenter is called with a valid object:
[Test]
public void ProjectsPresenter_AttachesPresenterToView()
{
// Arrange
var view = MockRepository.GenerateMock<IProjectsView>();
view.Expect(v => v.AttachPresenter(Arg<IProjectsViewObserver>.Is.Anything));
var repository = MockRepository.GenerateMock<IProjectsRepository>();
// Act
var presenter = new ProjectsPresenter(repository, view);
// Assert
view.VerifyAllExpectations();
}
I would add simple tests at the start, like:
null reference checks
FetchAll() returns any value
Do not add a lot of test code at first, but refine the tests as your dev code evolves.
I would add tests for exceptions, e.g. ArgumentException, corner cases, and the usual cases of FetchAll().
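For example, a null-argument test could look something like this (assuming the constructor is changed to validate its arguments, and using NUnit with a Rhino Mocks stub):
[Test]
public void ProjectsPresenter_NullRepository_ThrowsArgumentNullException()
{
    var view = MockRepository.GenerateStub<IProjectsView>();

    Assert.Throws<ArgumentNullException>(() => new ProjectsPresenter(null, view));
}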
PS: Does Start have to be public?
Pex is an interesting tool that's worth checking out. It can generate suites of unit tests with high code coverage: http://research.microsoft.com/en-us/projects/pex/. It doesn't replace your own knowledge of your code - and which test scenarios are more important to you than others - but it's a nice addition to that.
The purpose of writing tests before writing production code is first and foremost to put you (the developer) in the mindset of "how will I know when my code works?" When your development is focused on what the result of working code will look like, and not on the code itself, you are focused on the actual business value delivered by your code and not on extraneous concerns (millions of man-hours have been spent building and maintaining features users never asked for, wanted, or needed). When you do this you are doing "test-driven development".
If you are doing pure TDD, the answer is 100% code coverage. That is, you do not write a single line of production code that is not already covered by a line of unit test code.
In Visual Studio, if you go to Test -> Analyze Code Coverage, it will show you all of the lines of code that you do not have covered.
Practically speaking, it's not always feasible to enforce 100% code coverage. Also, some lines of code are much more important than others. Determining which depends, once again, on the business value provided by each line and the consequence of that line failing. Some lines (like logging) may have a smaller consequence than others.
Here's the scenario:
I have a method that reads in a file via a FileStream and a StreamReader in .NET. I would like to unit test this method and somehow remove the dependency on the StreamReader object.
Ideally I would like to be able to supply my own string of test data instead of using a real file. Right now the method makes use of the StreamReader.ReadLine method throughout. What is an approach to modifying the design I have now in order to make this test possible?
Depend on Stream and TextReader instead. Then your unit tests can use MemoryStream and StringReader. (Or load resources from inside your test assembly if necessary.)
Note how ReadLine is originally declared by TextReader, not StreamReader.
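A minimal sketch of that idea (the method and type names here are made up): the method depends only on TextReader, so a test can feed it a StringReader while production code passes a StreamReader.
using System.IO;

public static class ImageListParser
{
    public static int CountLines(TextReader reader)
    {
        // Works identically whether the reader wraps a file or an in-memory string.
        int count = 0;
        while (reader.ReadLine() != null)
            count++;
        return count;
    }
}

// Production: using (var reader = new StreamReader("images.txt")) { var n = ImageListParser.CountLines(reader); }
// Test:       var n = ImageListParser.CountLines(new StringReader("line1\nline2\nline3"));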
The simplest solution would be to have the method accept a Stream as a parameter instead of opening its own FileStream. Your actual code could pass in the FileStream as usual, while your test method could either use a different FileStream for test data, or a MemoryStream filled up with what you wanted to test (that wouldn't require a file).
Off the top of my head, I'd say this is a great opportunity to investigate the merits of Dependency Injection.
You might want to consider redesigning your method so that it takes a delegate that returns the file's contents. One delegate (the production one) might use the classes in System.IO, while the second one (for unit testing), returns the contents directly as a string.
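One way that delegate idea could look (again, the names are purely illustrative): the class takes a Func<string> that yields the file's contents, so a test can hand it a literal string.
using System;
using System.IO;

public class Importer
{
    private readonly Func<string> _readContents;

    public Importer(Func<string> readContents)
    {
        _readContents = readContents;
    }

    public int CountFields()
    {
        // The class never touches the file system itself.
        return _readContents().Split('^').Length;
    }
}

// Production: var importer = new Importer(() => File.ReadAllText(path));
// Test:       var importer = new Importer(() => "AAA23^YKL890^300000");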
I think the idea is to dependency-inject the TextReader and mock it for unit testing. You can mock the TextReader because it is an abstract class.
public class FileParser
{
private readonly TextReader _textReader;
public FileParser(TextReader reader)
{
_textReader = reader;
}
public List<TradeInfo> ProcessFile()
{
var rows = _textReader.ReadLine().Split(new[] { ',' }).Take(4);
return FeedMapper(rows.ToList());
}
private List<TradeInfo> FeedMapper(List<String> rows)
{
var row = rows.Take(4).ToList();
var trades = new List<TradeInfo>();
trades.Add(new TradeInfo { TradeId = row[0], FutureValue = Convert.ToInt32(row[1]), NotionalValue = Convert.ToInt32(row[3]), PresentValue = Convert.ToInt32(row[2]) });
return trades;
}
}
and then mock it using Rhino Mocks:
public class UnitTest1
{
[Test]
public void Test_Extract_First_Row_Mocked()
{
//Arrange
List<TradeInfo> listExpected = new List<TradeInfo>();
var tradeInfo = new TradeInfo() { TradeId = "0453", FutureValue = 2000000, PresentValue = 3000000, NotionalValue = 400000 };
listExpected.Add(tradeInfo);
var textReader = MockRepository.GenerateMock<TextReader>();
textReader.Expect(tr => tr.ReadLine()).Return("0453, 2000000, 3000000, 400000");
var fileParser = new FileParser(textReader);
var list = fileParser.ProcessFile();
listExpected.ShouldAllBeEquivalentTo(list);
}
}
BUT the question is whether it is good practice to pass such an object in from the client code; I feel it should rather be managed with a using block inside the class responsible for the processing. I agree with the idea of using a separate delegate for the actual code and one for the unit testing, but again that is a bit of extra code in production. I may be a bit lost with the idea of dependency injection and mocking even the file I/O open/read, which actually is not a candidate for unit testing, but the file-processing logic is, and that can be tested by just passing in the string content of the file (AAA23^YKL890^300000^TTRFGYUBARC).
Any ideas please! Thanks