Read from multiple similar unknown databases - c#

I am new to coding in C#; most of my experience is in plain old C.
I am trying to code an application in C# that will look up information from one of many Access databases. The databases are similar in that they all contain the same tables (named appropriately for each database: PIC, PICenum, etc.), and the same table in each database contains the same field names (LnetVar, Description, etc.). Each database applies to a different data set, which is why I have multiple databases instead of one all-encompassing database; it also makes maintenance much easier.
I currently have code that can access one specific database and pull out the data I need for use elsewhere in the code.
class DB_Handler
{
    lsftTestDataSet.PICDataTable ds;

    public DB_Handler(lsftTestDataSet.PICDataTable ds)
    {
        this.ds = ds;
    }

    public string GetDescription(byte lnetVar)
    {
        foreach (lsftTestDataSet.PICRow currentRow in ds)
        {
            if (currentRow.LnetVar == lnetVar)
            {
                return currentRow.Description;
            }
        }
        return "";
    }
}
I don't want to do the old copy-paste-modify trick to make it interact with each database. Rather, I want a single function I can call, passing in the database I want to use along with the relevant record and field information; from that it searches the database and returns the stored data.
I found a similar question here, but it was focused on optimizing, not on actually doing it. I was also unable to follow the code snippet given well enough to adapt it to my code.
Any help you can give me would be greatly appreciated!

As you said, what you need is a method/function receiving a parameter containing the connection string you want to use.

I would do something like this:
In Web.config or App.Config
<connectionStrings>
    <add name="DbConnectionString" connectionString="server=myserver;database={MyDatabasePlaceHolder};Integrated Security=SSPI;" />
</connectionStrings>
and then in code (you also need to add a reference to System.Configuration.dll):
using System;
using System.Configuration;
using System.Data;

class DB_Handler
{
    string connectionString = string.Empty;

    public DB_Handler(string databaseName)
    {
        // Swap the placeholder in the shared connection string for the requested database.
        this.connectionString = ConfigurationManager
            .ConnectionStrings["DbConnectionString"]
            .ConnectionString
            .Replace("{MyDatabasePlaceHolder}", databaseName);
    }

    public DataTable GetData()
    {
        // Make use of this.connectionString to fetch data
        throw new NotImplementedException();
    }
}
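Since the question is about Access databases, GetData would most likely use the OleDb provider rather than SqlClient, and the connection string in the config would be an OleDb one (for example with the Microsoft ACE provider) rather than the SQL Server string shown above. Below is a rough sketch of what a lookup could look like; the provider, the placeholder-based connection string, and the PIC table/column names are assumptions drawn from the question, not a tested implementation.
using System.Configuration;
using System.Data.OleDb;

class DB_Handler
{
    private readonly string connectionString;

    public DB_Handler(string databaseName)
    {
        // Assumed OleDb connection string with a {MyDatabasePlaceHolder} placeholder;
        // adjust the provider and file path to your setup.
        this.connectionString = ConfigurationManager
            .ConnectionStrings["DbConnectionString"]
            .ConnectionString
            .Replace("{MyDatabasePlaceHolder}", databaseName);
    }

    // Looks up the Description for a given LnetVar in this database's PIC table.
    public string GetDescription(byte lnetVar)
    {
        using (var connection = new OleDbConnection(connectionString))
        using (var command = new OleDbCommand(
            "SELECT Description FROM PIC WHERE LnetVar = ?", connection))
        {
            command.Parameters.AddWithValue("?", lnetVar);
            connection.Open();
            object result = command.ExecuteScalar();
            return result as string ?? "";
        }
    }
}
An Access-style connection string for the config might look like Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Databases\{MyDatabasePlaceHolder}.accdb; (again, an assumption about where and how the database files are stored).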

Related

Load an object from SQL Server but use different object properties for the c# class

I'm not sure if I'm being clear in the title but I'd like to "load" information from a SQL Server database into a list of objects. I'm new to c# and honestly haven't done any coding in a while.
Essentially the table would have columns: app_name, app_type, app_disposition and the object has properties: name, type, disposition. I've got what I want working using Dapper and simply making the object properties the same as the table columns.
Just curious whether you could load the data while using different object property names.
With Dapper, the simplest solution is to use column aliases.
Your class is:
public class MyPoco
{
    public string Name { get; set; }
    // Declare other properties here
}
And, you fill this class as below:
string sql = @"SELECT app_name AS Name, [include other columns here]
               FROM MyTable";

using (var conn = GetOpenConnection())
{
    var myPocoList = conn.Query<MyPoco>(sql);
}
The GetOpenConnection method above simply returns an open connection for whatever RDBMS you are using.
Please note that there are many other ways to map mismatched column and property names. Please refer to this Q&A for more details.
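For instance, one of those alternatives (a hedged sketch, not taken from the linked Q&A) is Dapper's CustomPropertyTypeMap, registered once per type; the prefix-stripping rule below is just an illustrative assumption for columns like app_name:
using System.Reflection;
using Dapper;

// Map columns such as "app_name" to properties such as "Name" by stripping the
// assumed "app_" prefix and matching the remainder case-insensitively.
SqlMapper.SetTypeMap(
    typeof(MyPoco),
    new CustomPropertyTypeMap(
        typeof(MyPoco),
        (type, columnName) => type.GetProperty(
            columnName.StartsWith("app_") ? columnName.Substring(4) : columnName,
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase)));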
Yes, you can, and you have to use the "Custom Mapping" feature. Here's a detailed article I wrote on the subject, along with code samples, that shows how you can do it.
https://medium.com/dapper-net/custom-columns-mapping-1cd45dfd51d6
Hint: use the Dapper.FluentMap plugin.
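For context, a rough sketch of how that plugin is typically wired up is below; treat the exact types and method names as an assumption and check the plugin's documentation before relying on it:
using Dapper.FluentMap;
using Dapper.FluentMap.Mapping;

public class MyPocoMap : EntityMap<MyPoco>
{
    public MyPocoMap()
    {
        // Map each mismatched column name to the matching POCO property.
        Map(p => p.Name).ToColumn("app_name");
        // e.g. Map(p => p.Type).ToColumn("app_type"); and so on
    }
}

// Register the map once at application startup:
// FluentMapper.Initialize(config => config.AddMap(new MyPocoMap()));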

Connect to SQL Server and SQL Server CE database in same code

Let me preface this by saying I'm kind of new to C# and database implementation. I'm trying to create some code that will connect to a database, but the type of database can vary between SQL Server, SQL Server CE, and maybe later on MySQL or something like that. I'm trying to implement the code using the DbConnection, DbCommand, DbDataAdapter, etc. classes that are the base classes of all these provider-specific classes. However, I am having difficulty trying to set this up.
I've tried using DatabaseFactory.CreateDatabase("connectionString"), but it wants me to put the connection string in the app.config file, and I don't want the program to be set up that way.
I've also tried DbProviderFactories.GetFactory(providerStr) but in order to get the correct providerStr, I need to know which database I'm using.
Basically I want it so that no matter what type of database I am using, the code will still work. Is there any easy way to do this or is it possible? Any help is appreciated.
I've come to the conclusion that there is no way to do this without exactly specifying the type of database to connect to. So my answer was to add an enumeration of the varying types of databases and use DbProviderFactories.GetFactory(providerStr) to create a DbProviderFactory instance. Here's some of the code if anyone is interested or trying to do what I'm doing.
DbProviderFactory dbFactory;

private DatabaseTypes m_dbType;
public DatabaseTypes DbType
{
    get { return m_dbType; }
    set
    {
        m_dbType = value;
        dbFactory = DbProviderFactories.GetFactory(DatabaseProvider.GetDatabaseProvider(m_dbType));
    }
}
In another file I have the enumeration and DatabaseProvider class defined. It looks like this:
public enum DatabaseTypes
{
    SQL,
    SQL_CE
}

public static class DatabaseProvider
{
    private const string SQL_Provider = "System.Data.SqlClient";
    private const string SQL_CE_Provider = "System.Data.SqlServerCe.3.5";

    public static string GetDatabaseProvider(DatabaseTypes dbType)
    {
        switch (dbType)
        {
            case DatabaseTypes.SQL_CE:
                return SQL_CE_Provider;
            case DatabaseTypes.SQL:
            default:
                return SQL_Provider;
        }
    }
}
Now if I want to add another database type, I'll just add it to the enumeration, add another provider string, and put another case in the switch statement! All I need to do now is figure out how I am going to handle the queries.
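One possible way to handle the queries (a sketch only, built on the standard DbProviderFactory members, with a hypothetical RunQuery name and placeholder SQL) is to let the factory create the provider-specific connection, command, and adapter:
using System.Data;
using System.Data.Common;

public DataTable RunQuery(string connectionString, string commandText)
{
    var table = new DataTable();
    using (DbConnection connection = dbFactory.CreateConnection())
    using (DbCommand command = dbFactory.CreateCommand())
    using (DbDataAdapter adapter = dbFactory.CreateDataAdapter())
    {
        connection.ConnectionString = connectionString;
        command.Connection = connection;
        command.CommandText = commandText;
        adapter.SelectCommand = command;
        adapter.Fill(table); // the adapter opens and closes the connection as needed
    }
    return table;
}
Because only the DbConnection/DbCommand/DbDataAdapter base classes are referenced, the same code should work against SQL Server, SQL Server CE, or any other registered provider.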

What is the best practice to manage stored procedures in C# code?

I'm quite new here, so please forgive me if I made any deviation from the rules of this website.
I'm trying to find the best way possible to manage the names of stored procedures in code.
Currently when I'm calling a stored procedure I'm using this code:
public static DataSet GetKeyTables()
{
    DataSet ds = new DataSet();
    ds = SqlDBHelper.ExecuteMultiSelectCommand("Sp_Get_Key_Tables",
        CommandType.StoredProcedure);
    return ds;
}
But I don't think that stating the name of the stored procedure in code is a wise idea, since it will be difficult to track.
I thought about Enum or app.config solutions, but I'm not sure these are the best ways.
Any idea will be highly appreciated.
You can have a class with constants holding the names of the SPs, and keep this class in a separate class library (DLL). Also, it is not a good idea to start a procedure name with sp_; see http://msdn.microsoft.com/en-us/library/dd172115(v=vs.100).aspx
public class StoredProcedures
{
    public const string GetKeyTables = "Sp_Get_Key_Tables";
}
In the end, it always boils down to the concrete name string of the SP, no matter what you do. You have to keep them in sync manually. - No way around it...
You could use configuration files for that, but that additional effort will only pay off when the names change frequently or need to remain changeable after compilation.
You can wrap the calls in a simple gateway class:
public static class StoredProcedures
{
    public static DataSet GetKeyTables()
    {
        return SqlDBHelper.ExecuteMultiSelectCommand(
            "Sp_Get_Key_Tables",
            CommandType.StoredProcedure);
    }

    public static DataSet GetFoobars()
    {
        return SqlDBHelper.ExecuteMultiSelectCommand(
            "Sp_Get_Foobars",
            CommandType.StoredProcedure);
    }
}
Alternatively you can have POCOs that know how to interact with the database:
public class KeyTable
{
    public int Id { get; set; }
    // whatever data you need

    public static List<KeyTable> GetKeyTables()
    {
        var ds = SqlDBHelper.ExecuteMultiSelectCommand(
            "Sp_Get_Key_Tables",
            CommandType.StoredProcedure);

        var keyTables = new List<KeyTable>();
        foreach (DataRow dr in ds.Tables[0].Rows)
        {
            // build the POCOs using the DataSet
        }
        return keyTables;
    }
}
The advantage of this is that not only is the SP name kept in a single place, but the logic for extracting data out of the DataSet lives in the same place as well.
I don't see a huge issue with what you are doing. You would need to store the SP name somewhere, so either in the query or in another config or helper function.
Depending on the specification, I tend towards a repository for CRUD operations, so I know all data access, including any SP calls, are in the ISomeRepository implementation.
When I work with stored procedures in C#, I follow these rules.
Every stored procedure is used only once in the code, so that I have only one place to update. I do hate magic strings and avoid them, but stored procedures are referenced in the Data Access Layer only once (e.g. Repositories, ReadQueries, etc.).
For CRUD operations, the pattern "sp_{Create/Read/Update/Delete}{EntityName}" is used.
If you want to have a single place with all your stored procedures, you can create a static class with the logic to build the stored procedure names, as sketched below.
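Here is a minimal sketch of such a static class, following the "sp_{Create/Read/Update/Delete}{EntityName}" pattern described above; the enum and method names are illustrative, not an established convention:
public enum CrudOperation
{
    Create,
    Read,
    Update,
    Delete
}

public static class StoredProcedureNames
{
    // e.g. For(CrudOperation.Read, "KeyTable") returns "sp_ReadKeyTable"
    public static string For(CrudOperation operation, string entityName)
    {
        return "sp_" + operation + entityName;
    }
}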

Can I temporarily store data in my C#.Net app to reduce the need for calls for data from SQL Server?

I created a C#.NET app that uses dates from a SQL Server 2008 database table. Is there a way for me to temporarily store the data so that my program does not have to repeatedly make server calls for the same set of information? I know how to pull the info I need and create a temporary dataset; however, it is only accessible to the particular method or class and then goes away. I need the results to be universally available until the program closes.
This is what I have so far and I am not sure where to go next:
SqlConnection ReportConnect = new SqlConnection(ConnectionString);
String reportQuery = @"SELECT DISTINCT DATE FROM dbo.myTable ORDER BY DATE DESC";

ReportConnect.Open();
SqlCommand cmd = ReportConnect.CreateCommand();
cmd.CommandType = CommandType.Text;
cmd.Connection = ReportConnect;
cmd.CommandText = reportQuery.ToString();

SqlDataReader rdr = cmd.ExecuteReader();
while (rdr.Read())
{
    // I can access the results here
}

// How do I add this data, for the life of the program instance, to my current
// dataset? Let's say the dataset is named "activeDataset".
If you are going to use key/value pair caching, I recommend you use HttpRuntime.Cache (available outside ASP.NET applications) since it already does a lot of work for you.
In its simplest implementation:
public IList<DateTime> GetUniqueDates()
{
    const string CacheKey = "RepositoryName.UniqueDates";
    Cache cache = HttpRuntime.Cache;

    List<DateTime> result = cache.Get(CacheKey) as List<DateTime>;
    if (result == null)
    {
        // If your application has multithreaded access to data, you might want to
        // put a double-check lock in here
        using (SqlConnection reportConnect = new SqlConnection(ConnectionString))
        {
            // ... create the command and execute the reader
            result = new List<DateTime>();
            while (reader.Read())
            {
                result.Add((DateTime)reader["Value"]);
            }
        }

        // You can specify various timeout options here
        cache.Insert(CacheKey, result);
    }
    return result;
}
Having said that, I usually use IoC trickery to create a caching layer in front of my repository for the sake of cohesion.
You could create a singleton object and store the data in it.
Be aware that there is a lot more to singleton objects that you will have to think about.
Have a look at:
Implementing the Singleton Pattern in C#
Singleton pattern
Implementing Singleton in C#
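As a rough sketch of that idea (the class and property names here are purely illustrative), a Lazy<T>-based singleton could hold the dates for the life of the application:
using System;
using System.Collections.Generic;

public sealed class DateCache
{
    private static readonly Lazy<DateCache> instance =
        new Lazy<DateCache>(() => new DateCache());

    public static DateCache Instance
    {
        get { return instance.Value; }
    }

    private DateCache() { }

    // Populate this once (e.g. at startup) and read it from anywhere until the program closes.
    public List<DateTime> ReportDates { get; set; }
}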
You should use SqlCacheDependency. Take a look at MSDN.
You could store the DataTable in a static variable that would be accessible from any part of your code (is this necessary?).
public class MyDataSetCache
{
    public static DataSet MyDataSet { get; set; }
}
Some other code...
// SQL statements....
MyDataSetCache.MyDataSet = activeDataset; // Edited to follow OP :-)
I usually serialize the whole object to a file and try to read it first before going to the database.
You can use a set of implementation hooks to achieve this:
A common data/application layer (a data singleton, or some data coupling via a static class with fewer "visible" method dependencies).
Caching: you can use a Dictionary with common string keys to detect (via the ContainsKey method) whether the data has already been fetched or still needs a SQL Server call. This can be useful when you need different DataSets, and Dictionary lookups are fast.
You can definitely use the Cache to reduce database hits. Besides using SqlDependency, you can have a time-based cache: invalidate your cache, say, every 4 hours and hit the database again. Check out Cache.Insert().
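A small sketch of that time-based approach using the Cache.Insert overload with an absolute expiration (the cache key, the 4-hour window, and the idea of a separate loading helper are all hypothetical):
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class ReportDateCache
{
    public static void CacheDates(List<DateTime> dates)
    {
        HttpRuntime.Cache.Insert(
            "ReportDates",                // cache key
            dates,                        // value to cache (e.g. loaded by the query from the question)
            null,                         // no CacheDependency
            DateTime.UtcNow.AddHours(4),  // absolute expiration: entry is dropped after 4 hours
            Cache.NoSlidingExpiration);   // no sliding expiration
    }
}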

looking for suggestions re: accessing data and populating List/arraylist/list

I am looking to do something like this:
access records in db with a datareader. I won't know how many records will come back - somewhere between 1 and 300 most of the time. The data will look something like this "0023","Eric","Harvest Circle", "Boston" for each record.
I wanted to immediately populate something (array, list?) and close the connection.
Pass the filled object back to my webservice where I will convert it to JSON before sending on to client.
The problem I am running into is what object/objects to populate with data from the datareader. I started out using an ArrayList (and have now read about using List instead), but although I could get back all the records (in this case just 2 Items), I could not access the individual fields inside each Item (Eric, Boston, etc).
Enter Plan B: foreach over each record in the datareader, add the individual column values to an array, and then add each array to the List. I think this will work, but I am not sure how to instantiate an array when I don't know how many would need to be instantiated. In other words, I would normally do this: string[] myarray = new string[] { "eric", "boston", "etc" };
But if I have multiple records, what does that look like? Populate array, add to List, clear original array and then repopulate it, add it to List, etc?
Any help would be greatly appreciated! Also, I am very open to alternative ways of doing this.
Thanks!
The standard answer here, short of using a persistence layer like the Entity Framework, is to create "entity" classes with the fields you need. Something like:
class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
... and then collect these up in a collection of some kind. A List<Customer> is fine.
List<Customer> customers = new List<Customer>();
string connectionString = null;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand("GetData", connection);
    command.CommandType = CommandType.StoredProcedure; // assuming "GetData" is a stored procedure

    using (IDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Customer c = new Customer();
            c.Id = (int)reader["ID"];
            c.FirstName = (string)reader["FirstName"];
            c.LastName = (string)reader["LastName"];
            customers.Add(c);
        }
    }
}
If you have a large number of tables, or a mutable list of fields, this can get hard to maintain. For simple tasks, though, this works fine.
Odds are you don't need a concrete class like a List or array. An IEnumerable is probably good enough. With that in mind, this is my favorite pattern:
public IEnumerable<IDataRecord> GetData()
{
    using (var cn = getOpenConnection(connectionString))
    using (var cmd = new SqlCommand("GetData", cn))
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            yield return (IDataRecord)rdr;
        }
    }
}
You could also modify that to create strongly-typed business objects, but I generally keep that in a separate layer/tier. Iterator blocks will do lazy evaluation/deferred execution and so the extra translation tier will not require an additional iteration of the results.
There is one trick with this code. You need to be sure you copy the data in each record to a real object at some point. You can't bind directly to it, or you'll get potentially-surprising results. One variation I use more often is to make the method generic and require a Func<IDataRecord, T> argument to translate each record as it is read.
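A hedged sketch of that generic variation (reusing the getOpenConnection helper and "GetData" command from the snippet above) might look like this; the caller's translate delegate copies each record into a real object as it is read:
public IEnumerable<T> GetData<T>(Func<IDataRecord, T> translate)
{
    using (var cn = getOpenConnection(connectionString))
    using (var cmd = new SqlCommand("GetData", cn))
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            yield return translate(rdr);
        }
    }
}

// Usage: each record is materialized into a Customer before the reader moves on.
// var customers = GetData(r => new Customer
// {
//     Id = (int)r["ID"],
//     FirstName = (string)r["FirstName"],
//     LastName = (string)r["LastName"]
// }).ToList();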
Michael's answer is usually your best bet. Once things start getting complicated, you can add abstractions (you'll need object factories that take in sql readers and various other fun things) and start passing around objects and factories. I've found this strategy is pretty helpful, and a well-written datalayer class can be used throughout your database-related projects.
One alternative is to use an SqlDataAdapter and to convert your output to a DataSet. In my experience this is pretty unwieldy and generally a pretty bad idea. That said, if your tendency is to grab a chunk of data, manipulate it (update pieces, add new pieces, delete pieces), this can be a nice way to do it in memory with C# code. Think twice before trying this technique. This is basically equivalent to grabbing data and populating a List of rows, though a DataSet has a bit more stuff in it than an array (e.g. it includes column names).
I feel like using an SqlDataAdapter to populate a DataSet is the closest answer to giving you what you are asking for in your question but the worst answer in terms of giving you what you actually need.
Create an object that has properties matching the fields that will be coming back in the datareader:
public class MyClass
{
    public int Id { get; set; }
    public string Name { get; set; }
    // ...
}
Then foreach over all the results in the datareader: create an object of this type for each record, set its properties from the values in the columns of that record, and add it to a List<MyClass>.
I would recommend looking into LINQ to SQL (if you have a very simple database), or NHibernate (if you have a more complex database). These are OR mappers that can greatly simplify the process of turning a SQL query into objects that can be used in your domain, passed through services, etc.
I think a simple solution is in order here. I would create a DataAdapter and then fill a DataTable with the data. You can then close the connection, and the data table will remain populated, since, unlike ADO, ADO.NET defaults to disconnected recordsets.
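A minimal sketch of that approach (the query text and connectionString variable here are placeholders, not from the question):
using System.Data;
using System.Data.SqlClient;

DataTable myDataTable = new DataTable();
using (var adapter = new SqlDataAdapter("SELECT * FROM MyTable", connectionString))
{
    // Fill opens and closes the connection for you; the table stays usable afterwards.
    adapter.Fill(myDataTable);
}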
You can then easily access the data by fieldname, e.g.,
myDataTable.Rows[0]["FirstName"].ToString()
If all you are going to do is convert the data to JSON, creating an object that maps properties to the database fields seems unnecessary since ADO.NET gives you this out of the box.
