I have created this interface for storing data in a file:
interface IFileStore {
    bool Import(string buffer);   // parse one record from a line of text
    string Export();              // serialize the record back to a line of text
}
...and a generic class that implements that interface:
class DBTextFile<T> where T : IFileStore, new()
{
    public List<T> db = new List<T>();

    public void SaveFile(string filename)
    {
        // File.WriteToFile and File.Read are custom helpers of mine,
        // not System.IO.File methods
        foreach (T d in db) File.WriteToFile(d.Export(), filename);
    }

    public void RestoreFile(string filename)
    {
        db = new List<T>();
        string buffer;
        // note the extra parentheses: the assignment must happen before the null check
        while ((buffer = File.Read(filename)) != null)
        {
            T temp = new T();
            if (temp.Import(buffer)) db.Add(temp);
        }
    }
}
This approach has been working for me for a while. However, I'm finding that I'm having to manage too many files. I think it might be more appropriate to use a database where each of my files would become a table in that database.
I would like to use a database that does not require the user to install any software on their system. (I want to keep things simple for the user.) Basically, I just want a database that is easy for me, the developer, to use and deploy.
Would MySQL be a good choice? Or would a different database be a better fit for my needs?
There are several single-file databases you can use.
The one I'm using and am happy with is SQLite.
You could also use Access, as Monika suggested, or search around and see what else you can find...
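For a rough idea of what that looks like in practice, here is a minimal sketch using the System.Data.SQLite ADO.NET provider (the file and table names are made up for the example):

using System.Data.SQLite; // from the System.Data.SQLite package

class SQLiteSketch
{
    static void Main()
    {
        // The whole database lives in a single file next to the application;
        // nothing has to be installed on the user's machine.
        using (var conn = new SQLiteConnection("Data Source=app.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "CREATE TABLE IF NOT EXISTS Records (Id INTEGER PRIMARY KEY, Payload TEXT)";
                cmd.ExecuteNonQuery();
            }
        }
    }
}

Each of your existing files would then become a table like this one, and your Import/Export logic a pair of INSERT/SELECT commands.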
I am trying to store emails in my SQL Server database. I get these emails from Exchange Web Services.
I am using Entity Framework and made an ADO.NET Data Model.
My question is how to write a method (StoreEmail) that stores these emails in my database.
This is the StoreMail method I have so far:
It should store my PhishingMails...
public object StoreMail(Model.PhishingMail PhishingMail)
{
    using (var phishingMailStorage = new PhishFinderModel())
    {
        PhishingMail = MailMapper.Map(Model.PhishingMail);
        phishingMailStorage.PhishingMails.Add();
        phishingMailStorage.SaveChanges();
        return PhishingMail;
    }
}
In my MailMapper class I set the properties that I want to store, which are Sender, Subject and Body:
public static PhishingMail Map(EmailMessage OutlookMail)
{
    PhishingMail readMail = new PhishingMail();
    readMail.Sender = OutlookMail.Sender.Address;
    readMail.Subject = OutlookMail.Subject;
    readMail.Body = OutlookMail.Body;
    return readMail;
}
This is my DB schema
To clarify my question: I already get the list of emails from the Exchange server. Now all I need to do is insert them into SQL Server.
How do I make my StoreEmail method work to do this?
Please don't be harsh; I am really new to this. It feels like I am swimming in an ocean of information and I don't know where to look or start. So any suggested tutorials are very welcome.
Thanks!
You're storing PhishingMail, and you're receiving a PhishingMail, so you don't need your mapping step.
Does this not work?
public void StoreMail(Model.PhishingMail PhishingMail)
{
    using (var phishingMailStorage = new PhishFinderModel())
    {
        phishingMailStorage.PhishingMails.Add(PhishingMail);
        phishingMailStorage.SaveChanges();
    }
}
You don't need to return the mail, either, since the caller already has it (and it's a lot tidier to have a void return if you're not returning a new/different object).
If you actually need to store an EmailMessage, your method should be:
public void StoreMail(EmailMessage emailMessage)
{
    var phishingMail = MailMapper.Map(emailMessage);
    using (var phishingMailStorage = new PhishFinderModel())
    {
        phishingMailStorage.PhishingMails.Add(phishingMail);
        phishingMailStorage.SaveChanges();
    }
}
I have basically created a class that, when a user logs into the website, queries the database and stores some settings in a List (so I have key/value pairs).
The reason for this is that I want to be able to access these settings without going back to the database each time.
I put these in a class and loop through the fields via a SQL query, adding them to the list.
How can I then access these variables from another part of the application? Or is there a better way to do this? I'm talking server side, not client side.
Here is an example of what I have at the moment:
public static void createSystemMetaData()
{
    string constring = ConfigurationManager.ConnectionStrings["Test"].ConnectionString;
    using (SqlConnection sql = new SqlConnection(constring))
    {
        sql.Open();
        SqlCommand systemMetaData = new SqlCommand("SELECT * FROM SD_TABLES", sql);
        //Set Modules
        var Modules = new List<KeyValuePair<string, string>>();
        using (SqlDataReader systemMetaDataReader = systemMetaData.ExecuteReader())
        {
            while (systemMetaDataReader.Read())
            {
                // read the column values (GetOrdinal alone would only return the column index)
                var name = systemMetaDataReader["Sequence"].ToString();
                var value = systemMetaDataReader["Property"].ToString();
                Modules.Add(new KeyValuePair<string, string>(name, value));
            }
        }
    }
}
Thanks
Any static properties of a class will be preserved for the lifetime of the application pool, assuming you're using ASP.NET under IIS.
So a very simple class might look like:
public static class MyConfigClass
{
    public static Lazy<Something> MyConfig = new Lazy<Something>(() => GetSomethings());

    public static Something GetSomethings()
    {
        // this will only be called once in your web application
    }
}
You can then consume this by simply calling
MyConfigClass.MyConfig.Value
For fewer users you can go with session state, as Bob suggested; with more users, however, you might need to move to a state server or load the settings from the database each time.
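For a small number of users, a minimal sketch of the session-state approach might look like this (SessionSettings and the "Modules" key are made-up names for the example):

using System.Collections.Generic;
using System.Web;

public static class SessionSettings
{
    // Store the settings in the user's session once, after login...
    public static void Save(List<KeyValuePair<string, string>> modules)
    {
        HttpContext.Current.Session["Modules"] = modules;
    }

    // ...and read them back from anywhere on the server side.
    public static List<KeyValuePair<string, string>> Load()
    {
        return HttpContext.Current.Session["Modules"]
            as List<KeyValuePair<string, string>>;
    }
}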
As others have pointed out, the risk of holding these values in global memory is that the values might change. Also, global variables are a bad design decision as you can end up with various parts of your application reading and writing to these values, which makes debugging problems harder than it need be.
A commonly adopted solution is to wrap your database access inside a facade class. This class can then cache the values if you wish to avoid hitting the database for each request. In addition, as changes are routed through the facade too, it knows when the data has changed and can empty its cache (forcing a database re-read) when this occurs. As an added bonus, it becomes possible to mock the facade in order to test code without touching the database (database access is notoriously difficult to unit test).
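A rough sketch of that facade idea, with all names hypothetical and the actual SQL elided:

using System.Collections.Generic;

public class SettingsFacade
{
    private readonly object _lock = new object();
    private Dictionary<string, string> _cache;

    public string GetSetting(string key)
    {
        lock (_lock)
        {
            // lazily fill the cache; the database is only hit on a cache miss
            if (_cache == null)
                _cache = LoadSettingsFromDatabase();
            return _cache[key];
        }
    }

    public void UpdateSetting(string key, string value)
    {
        lock (_lock)
        {
            WriteSettingToDatabase(key, value); // changes are routed through the facade...
            _cache = null;                      // ...so it can empty its cache when data changes
        }
    }

    private Dictionary<string, string> LoadSettingsFromDatabase()
    {
        // SELECT the key/value pairs here, e.g. from SD_TABLES as in the question
        return new Dictionary<string, string>();
    }

    private void WriteSettingToDatabase(string key, string value)
    {
        // UPDATE the underlying row here (elided)
    }
}

A test can then substitute a mock for SettingsFacade and exercise the calling code without a database.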
From the looks of things you are using universal values irrespective of users, so a SqlCacheDependency would be useful here.
Make sure you set up a database dependency in web.config for the name Test:
public static class CacheData {
    public static List<KeyValuePair<string,string>> GetData() {
        var cache = System.Web.HttpContext.Current.Cache;
        SqlCacheDependency SqlDep = null;
        var modules = cache["Modules"] as List<KeyValuePair<string,string>>;
        if (modules == null) {
            // Because of possible exceptions thrown when this
            // code runs, use Try...Catch...Finally syntax.
            try {
                // Instantiate SqlDep using the SqlCacheDependency constructor.
                SqlDep = new SqlCacheDependency("Test", "SD_TABLES");
            }
            // Handle the DatabaseNotEnabledForNotificationException with
            // a call to the SqlCacheDependencyAdmin.EnableNotifications method.
            catch (DatabaseNotEnabledForNotificationException) {
                SqlCacheDependencyAdmin.EnableNotifications("Test");
            }
            // Handle the TableNotEnabledForNotificationException with
            // a call to the SqlCacheDependencyAdmin.EnableTableForNotifications method.
            catch (TableNotEnabledForNotificationException) {
                SqlCacheDependencyAdmin.EnableTableForNotifications("Test", "SD_TABLES");
            }
            finally {
                // Assign a value to modules here before calling the next line
                cache.Insert("Modules", modules, SqlDep);
            }
        }
        return modules;
    }
}
I'm working on a simple project more as an exercise in TDD than anything else. The program fetches some images from a web server and saves them as files. For the record, what I am doing (my desired end result) is very similar to this perl script but in C#.
I've got to the point where I need to save the files to disk, and I need to write unit tests to drive the code. I'm not sure how to approach this. I want to verify that the code created the expected files with the expected file name(s), and of course I don't want to touch the file system at all. I'm not completely new to unit testing and TDD, but for some reason I'm really not clear what to do in this situation. I'm sure the answer will be obvious once I've seen it, but... the mysterious place in my brain where code comes from is just not cooperating.
My tools of choice are MSpec and FakeItEasy, but suggestions in any frameworks would be gratefully received. What are sensible approaches to unit testing file system interactions?
What would help here is Dependency Injection. Break up the monolithic download operation into smaller pieces and inject them into the downloader. Declare interfaces for these pieces:
public interface IImageFetcher
{
    IEnumerable<Image> FetchImages(string address);
}

public interface IImagePersistor
{
    void StoreImage(Image image, string path);
}
With these declarations you can write a downloader class that integrates the whole thing like this:
public class ImageDownloader
{
    private IImageFetcher _imageFetcher;
    private IImagePersistor _imagePersistor;

    // Constructor injection of components
    public ImageDownloader(IImageFetcher imageFetcher, IImagePersistor imagePersistor)
    {
        _imageFetcher = imageFetcher;
        _imagePersistor = imagePersistor;
    }

    public void Download(string source, string destination)
    {
        var images = _imageFetcher.FetchImages(source);
        int i = 1;
        foreach (Image img in images) {
            string path = Path.Combine(destination, "Image" + i.ToString("000"));
            _imagePersistor.StoreImage(img, path);
            i++;
        }
    }
}
Note that ImageDownloader does not know which implementations will be used and how they work.
Now, you can supply a dummy persistor when testing, that stores the filenames in a List<string> for instance, instead of supplying the real one that stores to the file system.
UPDATE
// For testing purposes only.
class DummyImagePersistor : IImagePersistor
{
    public readonly List<string> Filenames = new List<string>();

    public void StoreImage(Image image, string path)
    {
        Filenames.Add(path);
    }
}
Testing:
var persistor = new DummyImagePersistor();
var sut = new ImageDownloader(new ImageFetcher(), persistor);

sut.Download("http://myimages.com/images", @"C:\Destination");
Assert.AreEqual(10, persistor.Filenames.Count);
...
I've got an MVC3 project and one of the models is built as a separate class library project, for re-use in other applications.
I'm using mini-profiler and would like to find a way to profile the database connections and queries that are made from this class library and return the results to the MVC3 application.
Currently, in my MVC3 app, the existing models grab a connection using the following helper class:
public class SqlConnectionHelper
{
    public static DbConnection GetConnection()
    {
        var dbconn = new SqlConnection(ConfigurationManager.ConnectionStrings["db"].ToString());
        return new StackExchange.Profiling.Data.ProfiledDbConnection(dbconn, MiniProfiler.Current);
    }
}
The external model can't call this function though, because it knows nothing of the MVC3 application, or of mini-profiler.
One way I thought of would be to have an IDbConnection Connection field on the external model and then pass in a ProfiledDbConnection object to this field before I call any of the model's methods. The model would then use whatever's in this field for database connections, and I should get some profiled results in the MVC3 frontend.
However, I'm not sure if this would work, or whether it's the best way of doing this. Is there a better way I'm missing?
ProfiledDbConnection isn't dapper: it is mini-profiler. We don't provide any magic that can take over all connection creation; the only thing I can suggest is to maybe expose an event in your library that can be subscribed externally - so the creation code in the library might look a bit like:
public static event SomeEventType ConnectionCreated;

static DbConnection CreateConnection() {
    var conn = ExistingDbCreationCode();
    var handler = ConnectionCreated;
    if (handler != null) {
        var args = new SomeEventArgsType { Connection = conn };
        handler(typeof(YourType), args);
        conn = args.Connection;
    }
    return conn;
}
which could give external code the chance to do whatever they want, for example:
YourType.ConnectionCreated += (s, a) => {
    a.Connection = new StackExchange.Profiling.Data.ProfiledDbConnection(
        a.Connection, MiniProfiler.Current);
};
I'm building my first web app with .net, and it needs to interact with a very large existing database. I have the connection set up, and have made a class that I can call to build select, insert, update and delete queries passing in several parameters.
I can connect by writing the query I want in the button click, but I want to know is this the best solution? It seems hard to debug this way, as it is mixing the database code with other code.
In the past (in other languages) I have created a class which would contain all of the database query strings and parameters which would be called by the rest of the code. That way if something simple like the stored procedure parameters change, the code is all in one place.
When I look for this in .net, I see nothing about doing it this way and I'm keen to learn the best practices.
protected void Button1_Click(object sender, EventArgs e)
{
    NameLabel.Text = UserNoTextBox.Text;

    string spName = "SP_SelectUser";
    SqlParameter[] parameters = new SqlParameter[]
    {
        new SqlParameter("@User_No", UserNoTextBox.Text)
    };

    DataAccess dbAccess = new DataAccess();
    DataTable retVal = dbAccess.ExecuteParamerizedSelectCommand(spName, CommandType.StoredProcedure, parameters);
}
Update: The class I was referring to was the DataAccess class from the following website:
http://www.codeproject.com/Articles/361579/A-Beginners-Tutorial-for-Understanding-ADO-NET
(Class available at http://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=361579)
Update: In the end I opted for using MVC 3 with Entity Framework - it's great!
This is a huge topic, but a very brief view might be as follows:
DataTable must die (ok, it has a few uses, but in general: it must die); consider using a custom type such as:
public class User {
    public int Id {get;set;}
    public string Name {get;set;}
    public string EmployeeNumber {get;set;}
    // etc
}
It should also be noted that many ORM tools will generate these for you from the underlying table structure.
don't mix UI and data access; separate this code, ideally into separate classes, but at the minimum into separate methods:
protected void Button1_Click(object sender, EventArgs e)
{
    NameLabel.Text = UserNoTextBox.Text;
    var user = SomeType.GetUser(UserNoTextBox.Text);
    // do something with user
}
...
public User GetUser(string userNumber) {
    // ... your DB code here
}
use a library such as an ORM (EF, LINQ-to-SQL, LLBLGenPro) or a micro-ORM (dapper, PetaPoco, etc) - for example, here's that code with dapper:
public User GetUser(string userNumber) {
    using (var conn = GetOpenConnection()) {
        return conn.Query<User>("SP_SelectUser",
            new { User_No = userNumber }, // <=== parameters made simple
            commandType: CommandType.StoredProcedure).FirstOrDefault();
    }
}
or with LINQ-to-SQL (EF is very similar):
public User GetUser(string userNumber) {
    using (var db = GetDataContext()) {
        return db.Users.FirstOrDefault(u => u.User_No == userNumber);
    }
}
not everything needs to be a stored procedure; there used to be a huge performance difference between the two - but that is no longer the case. There are valid reasons to use them (very granular security, shared DB with multiple application consumers, a dba who thinks they are a developer), but they also create maintenance problems, especially when deploying changes. In most cases I would not hesitate to use raw (but parameterized) SQL, for example:
public User GetUser(string userNumber) {
    using (var conn = GetOpenConnection()) {
        return conn.Query<User>(@"
            select [some columns here]
            from Users where User_No = @userNumber",
            new { userNumber }).FirstOrDefault();
    }
}
I would do something like this:
code behind
protected void Button1_Click(object sender, EventArgs e)
{
    UserController uc = new UserController();
    User u = uc.GetUser(Convert.ToInt32(UserNoTextBox.Text));
    NameLabel.Text = u.UserName;
}
And in your UserController.cs
class UserController {
    public User GetUser(int userId)
    {
        return DataAccess.GetUser(userId);
    }
}
And in your User.cs
class User {
    private string _userName;
    public string UserName { get { return _userName; } set { _userName = value; } }
}
And in your DataAccess.cs using Dapper
public User GetUser(int id)
{
    // cnn is assumed to be an open DbConnection
    var user = cnn.Query<User>("SP_SelectUser", new { User_No = id },
        commandType: CommandType.StoredProcedure).First();
    return user;
}
This is just one option; you can also use different ORMs.
It is a matter of personal taste. Here is a list of .NET ORMs.
Good luck!
General best practices tend to be language agnostic in the OOP world. You say you have worked with data access in other OOP languages in the past, so you would do exactly the same in .NET. See the response from @Oded for good links on general best practices.
If you are looking for guidance on how best to use .NET's data access technology, try MSDN's articles on ADO.NET as a starting point.
What you are doing will work, but doesn't follow good OOP principles, one of which is separation of concerns - your UI shouldn't be talking to the database directly.
You should have all your data access code placed in a separate layer which your UI layer can call.
Also see the SOLID principles and Don't repeat yourself.
When interacting with the database, many people use an ORM - Entity Framework, NHibernate, Dapper and many others exist for .NET applications. These act as a data access layer and you should investigate their usage for your application.
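To make that layering concrete, here is a rough sketch of a small repository class the UI could call, written against a hypothetical Entity Framework context (AppDbContext and its Users set are made-up names):

using System.Linq;

// A thin data access layer: the UI calls this instead of talking to the database.
public class UserRepository
{
    public User GetByNumber(string userNumber)
    {
        using (var db = new AppDbContext()) // hypothetical EF DbContext over the existing DB
        {
            return db.Users.FirstOrDefault(u => u.User_No == userNumber);
        }
    }
}

The button click handler then only needs one line: var user = new UserRepository().GetByNumber(UserNoTextBox.Text);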