Context database and SharePoint in a C# console app

In short:
I have a C# console app that I used to test writing data from SQL to SharePoint lists.
Everything works fine while it runs as a console app: I get the connection to SQL, the context is created, then I connect to the SharePoint site and proceed with updating certain fields.
Now, when I deploy the working solution as a timer job (a .wsp) for SharePoint, add it to the server farm, deploy it as a feature to the site, and run it as a timer job, it does work, but only, so to speak, "once".
When I run that timer job, it receives the SQL context, connects, and updates the SharePoint lists. But when I change data in a SQL table (e.g. a field called "price" from 10.99 to 11.99) and run the timer job again, it still updates only the "old" data, i.e. the 10.99 value.
When doing this with the console app .exe on the server, no matter how many database changes I make, it always picks up the newest data; as a timer job, though, it seems to "hang on" to the previous context connection and updates only the previous data.
Do I need to specify, in the code, that the context should be dropped after the timer job has finished its run, so that the next run can get the same but "fresh" context?
Here is the initial code in the .wsp:
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class TimerJobPromoToolsDefinition : SPJobDefinition
{
    public TimerJobPromoToolsDefinition() : base()
    {
    }

    public TimerJobPromoToolsDefinition(string jobName, SPService service)
        : base(jobName, service, null, SPJobLockType.None)
    {
        this.Title = "PromoTools_Timer_Job";
    }

    public TimerJobPromoToolsDefinition(string jobName, SPWebApplication webapp)
        : base(jobName, webapp, null, SPJobLockType.ContentDatabase)
    {
        this.Title = "PromoTools_Timer_Job";
    }

    public override void Execute(Guid targetInstanceId)
    {
        System.Diagnostics.Trace.Assert(false);   // always-failing assert, presumably left in for debugging
        Main();
    }

    private static TimerJobConnection _context;

    public static void Main()
    {
        PrintInitializing();
        _context = new TimerJobConnection();
        string siteURL = "http://somesite.com/";
        try
        {
            using (SPSite site = new SPSite(siteURL))
            {
                using (SPWeb web = site.OpenWeb())
                {
                    var catalogs = GetArtikliFromPromoTools(web);
                    var articlesFiltered = GetArtikliFromDB(catalogs);
                    PrintFinishedLoadingArticles();
                    GetSharePointCatalogHeaders(web);
                    UpdateArticles(catalogs, articlesFiltered, web);
                    PrintEndOfOperation();
                    Console.WriteLine("Press any key to continue...");
                }
            }
        }
        catch (Exception)
        {
            // the caught exception itself is discarded here; only the Print* helpers run
            PrintErrorSharepointConnection();
            PrintPressKeyToExit();
        }
        finally
        {
        }
    }

    // helper methods (GetArtikliFromPromoTools, GetArtikliFromDB, UpdateArticles, Print*) omitted
}

I think the "context" should not the issue based on my experience.
As you catched the exception, try to check is any exception, You could try to debug the code by attaching to owstimer process also.
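That said, if TimerJobConnection wraps an Entity Framework context or some other IDisposable connection object (an assumption; the question does not show the type), the cheapest way to rule out the stale-context theory is to drop the static field and create and dispose the context inside each run, for example:

public override void Execute(Guid targetInstanceId)
{
    // OWSTIMER.EXE keeps the assembly and its static state alive between runs,
    // so build a fresh context per run instead of reusing a static one.
    using (var context = new TimerJobConnection())
    {
        RunJob(context);   // hypothetical refactoring of Main() that takes the context as a parameter
    }
}

If the data is still stale after that, the caching is more likely happening on the SQL/data-access side than in the timer job plumbing.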

Related

MVC: EF6 Connection Pooling and SQL Server CONTEXT_INFO

In an ASP.NET MVC application, I'm trying to use SQL Server's CONTEXT_INFO to pass along the currently logged-in user, so that my audit triggers record not only the web server's login but also the login of the site user.
I'm having trouble being certain that the current user will always be fed into the database server context, though.
On the backend I have everything set up: a sproc to set the context, a function to pull it, and DML triggers to record it; no problem there.
The app end is a bit more involved. I subscribe to the Database.Connection.StateChange event so I can catch each newly opened connection and set this context accordingly.
Additionally, to be able to retrieve the current login ID of the MVC site in the data layer (which has no access to the web project), I supply a delegate to the EF constructor that returns the user ID. This also means that any other peripheral projects I have set up take this dependency as well, and it keeps most of the implementation detail out of my hair during web development:
public class CoreContext : DbContext
{
    Func<int> _uidObtainer;

    public CoreContext(Func<int> uidObtainer) : base(nameof(CoreContext)) { construct(uidObtainer); }
    public CoreContext(Func<int> uidObtainer, string connection) : base(connection) { construct(uidObtainer); }

    void construct(Func<int> uidObtainer) {
        // disallow updates of the db from our models
        Database.SetInitializer<CoreContext>(null);
        // catch the connection change so we can update for our userID
        _uidObtainer = uidObtainer;
        Database.Connection.StateChange += connectionStateChanged;
    }

    private void connectionStateChanged(object sender, System.Data.StateChangeEventArgs e) {
        // set our context info for logging; only act when the connection transitions to Open
        if (e.OriginalState == System.Data.ConnectionState.Open ||
            e.CurrentState != System.Data.ConnectionState.Open) {
            return;
        }

        int uid = _uidObtainer();
        var conn = ((System.Data.Entity.Core.EntityClient.EntityConnection)sender).StoreConnection;
        var cmd = conn.CreateCommand();
        cmd.CommandText = "audit.SetContext";
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.Parameters.Add(new System.Data.SqlClient.SqlParameter("@DomainUserID", uid));
        cmd.ExecuteNonQuery();
    }

    // etc etc...
}
In my MVC project, I'll have code that looks like this:
context = new Data.CoreContext(() => AppService.UserID());
(making use of a readily accessible method to pass as delegate, which in turn reads from HttpContext.Current.User)
This is all shaping up nicely, except one unknown:
I know that it's possible for an EF context instance to span multiple logged-in users, since it lives as part of the IIS app pool and not per HttpContext.
What I don't know enough about is connection pooling and how connections are opened/re-opened, so I can't be sure that each time my StateChange handler runs, I'll actually be retrieving the new user ID from the delegate.
Said differently: is it possible for a single connection to stay open and be used over the span of two separate HttpContext instances? I believe yes, seeing as there's nothing to enforce otherwise (at least not that I'm aware of).
What can I do to ensure that each connection is getting the current HttpContext?
(possibly pertinent notes: There's no UoW/Repository pattern outside of EF itself, and data contexts are generally instantiated once per controller)
It turns out that one context per controller is generally incorrect. Instead I should be using one context per request, which (besides other advantages) ensures my scenario operates correctly as well.
I found this answer, which explains the reasoning behind it: One DbContext per web request... why?
And I found this answer, which explains quite succinctly how to implement via BeginRequest and EndRequest: One DbContext per request in ASP.NET MVC (without IOC container)
(code from second answer pasted below to prevent linkrot)
protected virtual void Application_BeginRequest()
{
    HttpContext.Current.Items["_EntityContext"] = new EntityContext();
}

protected virtual void Application_EndRequest()
{
    var entityContext = HttpContext.Current.Items["_EntityContext"] as EntityContext;
    if (entityContext != null)
        entityContext.Dispose();
}
And in your EntityContext class...
public class EntityContext
{
    public static EntityContext Current
    {
        get { return HttpContext.Current.Items["_EntityContext"] as EntityContext; }
    }
}
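Putting the two together (a sketch that reuses the Data.CoreContext and AppService.UserID() helper from the question; the item key name is arbitrary), the per-request wiring for the audited context might look like this:

protected virtual void Application_BeginRequest()
{
    // one CoreContext per request; the delegate resolves the user from the current HttpContext
    HttpContext.Current.Items["_CoreContext"] = new Data.CoreContext(() => AppService.UserID());
}

protected virtual void Application_EndRequest()
{
    var coreContext = HttpContext.Current.Items["_CoreContext"] as Data.CoreContext;
    if (coreContext != null)
        coreContext.Dispose();
}

Because the context, and therefore its connections and StateChange handler, now lives no longer than a single request, the delegate can only ever observe that request's user.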

Check for WSUS server using Windows Update Agent API

I'm trying to determine whether or not WSUS manages the current machine by using the Windows Update Agent API. Currently, my code looks at all the services registered with Windows Update Agent.
private static bool IsManagedByWSUS(WUApiNativeMethods.IUpdateSession3 session)
{
    WUApiNativeMethods.IUpdateServiceManager serviceManager = session.CreateUpdateServiceManager();
    WUApiNativeMethods.IUpdateServiceCollection services = serviceManager.Services;

    foreach (WUApiNativeMethods.IUpdateService service in services)
    {
        // Indicates whether the service is registered with automatic updates
        var registeredWithAu = service.IsRegisteredWithAU();

        // Indicates whether the service is a managed service
        var isManaged = service.IsManaged();

        var name = service.Name().ToLower();

        if (registeredWithAu &&
            isManaged &&
            name.Contains("windows server update service"))
        {
            return true;
        }
    }

    return false;
}
The problem is that I don't know whether checking the name is reliable. I see that there is a service ID field on the IUpdateService object that is a GUID. I tested a couple of boxes, and it always seems to be 3da21691-e39d-4da6-8a4b-b43877bcb1b7.
How can I reliably check for WSUS?
You can check the registry key HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate.
Shell command:
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

create a timer job to run once a month which exports sharepoint list items to excel and stores in a document library

I want to create a timer job or workflow that runs once a month, exports SharePoint list data to Excel, and stores the file in a document library.
I have downloaded the code to create a timer job from the link below, but I don't know how to include the above requirement:
http://code.msdn.microsoft.com/SharePoint-2010-Custom-416cd3a1
// Create a class derived from SPJobDefinition
class ListTimerJob : SPJobDefinition
{
    public ListTimerJob()
        : base()
    {
    }

    public ListTimerJob(string jobName, SPService service, SPServer server, SPJobLockType targetType)
        : base(jobName, service, server, targetType)
    {
    }

    public ListTimerJob(string jobName, SPWebApplication webApplication)
        : base(jobName, webApplication, null, SPJobLockType.ContentDatabase)
    {
        this.Title = "List Timer Job";
    }

    public override void Execute(Guid contentDbId)
    {
        // get a reference to the current site collection's content database
        SPWebApplication webApplication = this.Parent as SPWebApplication;
        SPContentDatabase contentDb = webApplication.ContentDatabases[contentDbId];

        // get a reference to the "ListTimerJob" list in the RootWeb of the first site collection in the content database
        SPList Listjob = contentDb.Sites[0].RootWeb.Lists["ListTimerJob"];

        // create a new list item, set the Title to the current day/time, and update the item
        SPListItem newList = Listjob.Items.Add();
        newList["Title"] = DateTime.Now.ToString();
        newList.Update();
    }
}
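The Execute method above only writes a log item. To cover the actual requirement of exporting list items and storing the result in a document library, something along the following lines could be added to ListTimerJob and called from Execute. This is only a sketch under assumptions: the source list name "MyList" and the library name "Documents" are placeholders, and it produces a CSV file (which Excel opens) rather than a native .xlsx workbook:

private static void ExportListToLibrary(SPWeb web)
{
    SPList sourceList = web.Lists["MyList"];      // placeholder list name
    SPList library = web.Lists["Documents"];      // placeholder document library name

    // build a simple CSV: a header row from the non-hidden fields, then one row per item
    var sb = new System.Text.StringBuilder();
    var fieldNames = new System.Collections.Generic.List<string>();

    foreach (SPField field in sourceList.Fields)
    {
        if (!field.Hidden)
            fieldNames.Add(field.InternalName);
    }
    sb.AppendLine(string.Join(",", fieldNames.ToArray()));

    foreach (SPListItem item in sourceList.Items)
    {
        var values = new System.Collections.Generic.List<string>();
        foreach (string fieldName in fieldNames)
        {
            object value = item[fieldName];
            values.Add(value == null ? "" : "\"" + value.ToString().Replace("\"", "\"\"") + "\"");
        }
        sb.AppendLine(string.Join(",", values.ToArray()));
    }

    // store the export in the document library, overwriting last month's file
    string fileName = string.Format("Export_{0:yyyy-MM}.csv", DateTime.Now);
    byte[] content = System.Text.Encoding.UTF8.GetBytes(sb.ToString());
    library.RootFolder.Files.Add(fileName, content, true);
}

From Execute it could be invoked as ExportListToLibrary(contentDb.Sites[0].RootWeb);, mirroring the site access already shown above.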
//Add Event receiver at Feature Level
[Guid("9a724fdb-e423-4232-9626-0cffc53fb74b")]
public class Feature1EventReceiver : SPFeatureReceiver
{
const string List_JOB_NAME = "ListLogger";
// Uncomment the method below to handle the event raised after a feature has been activated.
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
SPSite site = properties.Feature.Parent as SPSite;
// make sure the job isn't already registered
foreach (SPJobDefinition job in site.WebApplication.JobDefinitions)
{
if (job.Name == List_JOB_NAME)
job.Delete();
}
// install the job
ListTimerJob listLoggerJob = new ListTimerJob(List_JOB_NAME, site.WebApplication);
SPMinuteSchedule schedule = new SPMinuteSchedule();
schedule.BeginSecond = 0;
schedule.EndSecond = 59;
schedule.Interval = 5;
listLoggerJob.Schedule = schedule;
listLoggerJob.Update();
}
// Uncomment the method below to handle the event raised before a feature is deactivated.
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
SPSite site = properties.Feature.Parent as SPSite;
// delete the job
foreach (SPJobDefinition job in site.WebApplication.JobDefinitions)
{
if (job.Name == List_JOB_NAME)
job.Delete();
}
}
I would also advise you not to use the SharePoint timer job engine.
It's definitely not stable.
Sometimes jobs simply don't trigger, and they are difficult and slow to instantiate.
Of course you could always spend time tweaking SharePoint to achieve stability, but there is no guarantee. I know it sounds categorical, but trust me: I can't remember all the problems we had with this engine, but we lost a lot of time on it.
I recommend Quartz.NET or the Windows Task Scheduler, as mentioned earlier.
These are well-proven solutions, used by many people, including for SharePoint.
We implemented Quartz.NET for SharePoint at my company, and all our timer jobs run on this engine.
We have not had a glitch in two years.
Best regards.
You should change the SPMinuteSchedule to SPMonthlyByDaySchedule; see http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spschedule.aspx (a sketch follows below).
But my recommendation is to use the Windows Server Task Scheduler with a console application. It's easy to change, easy to maintain (no iisreset needed!), and easy to log everything. We use console applications for various scheduled jobs, with intervals ranging from 1 hour to 1 day.
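For the once-a-month part, here is a sketch of the schedule block in FeatureActivated using SPMonthlySchedule; the member names (BeginDay/EndDay plus the hour window) are written from memory, so verify them against the SPSchedule documentation linked above. SPMonthlyByDaySchedule works the same way for "first Monday of the month"-style schedules:

// replace the SPMinuteSchedule block in FeatureActivated with a monthly window:
// the timer service may start the job on day 1 of each month, between 02:00 and 03:00
SPMonthlySchedule schedule = new SPMonthlySchedule();
schedule.BeginDay = 1;
schedule.EndDay = 1;
schedule.BeginHour = 2;
schedule.EndHour = 3;
listLoggerJob.Schedule = schedule;
listLoggerJob.Update();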

Edit and delete from SQL database through Azure

I have been building a Windows Phone 8 app and a Windows Azure cloud service that allow people to store schedules in the cloud. I have implemented a single sign-on system and a cloud service used to store the schedule items.
I have, however, run into yet another problem: because I am using a cloud service to communicate with the database, the commands are slightly different. For example, this is the code to add a record to the database:
public bool addMedication(string userid, string medname, DateTime medtime)
{
    using (var meds = new TMP_Meds_Entities())
    {
        meds.med_schedule.Add(new med_schedule()
        {
            userid = userid,
            medname = medname,
            medtime = medtime
        });
        meds.SaveChanges();
        return true;
    }
}
I now need to implement methods that allow a user to edit or delete a particular record in the database. Does anybody know how I might go about editing or deleting a record? As a note, I am using Entity Framework.
Thanks
This is more or less from scratch, so you'll need to adapt it to your scenario, but this should get you started...
Update:
public void UpdateMed(meds med)
{
    if (ModelState.IsValid)
    {
        db.meds.Attach(med);
        db.Entry(med).State = EntityState.Modified;
        db.SaveChanges();
    }
}
Delete:
public void DeleteMed(int medid)
{
    meds med = db.meds.Find(medid);
    db.meds.Remove(med);
    db.SaveChanges();
}
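Adapted to the entity and context names from the question (a sketch; the integer key column, called id here, is an assumption, so map Find() to whatever the med_schedule primary key actually is):

public bool updateMedication(med_schedule updated)
{
    using (var meds = new TMP_Meds_Entities())
    {
        // attach the detached entity and mark it modified so all columns are written back
        meds.med_schedule.Attach(updated);
        meds.Entry(updated).State = EntityState.Modified;   // EntityState lives in System.Data.Entity in EF6
        meds.SaveChanges();
        return true;
    }
}

public bool deleteMedication(int id)
{
    using (var meds = new TMP_Meds_Entities())
    {
        med_schedule med = meds.med_schedule.Find(id);      // "id" is an assumed primary key name
        if (med == null)
            return false;                                   // nothing to delete
        meds.med_schedule.Remove(med);
        meds.SaveChanges();
        return true;
    }
}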
Here are a couple good resources for more detail
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-basic-crud-functionality-with-the-entity-framework-in-asp-net-mvc-application
http://www.dotnetcurry.com/showarticle.aspx?ID=619

Receiving stale data from NHibernate when going through a web service

I'm having trouble grasping some of NHibernate's caching / database-hit-prevention techniques.
I've created a test case that is supposed to ensure that our web service API properly creates and saves a new object. The test case passes fine when I do not have to serialize through the web service (i.e. when working directly with the web service class instead of adding it as a service reference and going through it). However, I receive stale data from NHibernate when I run my test case against the hosted web service.
[Test]
public void CreateInstallTask()
{
    int numberOfTasks = TaskDao.GetAll().Count();

    TaskDto taskDto = WorkflowServices.CreateInstallTask(OrderID, TaskTemplateID, SiteID, DataCenterID,
                                                         DeviceTemplateID, DeviceName, Username);

    if (TaskDao.GetAll().Count() == numberOfTasks)
    {
        string failureReason =
            string.Format("Failed to create new Install task with OrderID: {0}", taskDto.OrderID);
        throw new Exception(failureReason);
    }
}
[WebMethod(Description = "Creates a new install Task.")]
public TaskDto CreateInstallTask(int orderID, int taskTemplateID, int siteID, int dataCenterID,
                                 int deviceTemplateID, string deviceName, string username)
{
    try
    {
        Order order = OrderDao.GetByID(orderID, shouldLock: false);
        if (order == null)
            throw new Exception(string.Format("Failed to find an order with ID {0}", orderID));

        Task task = new Task
        {
            Order = order,
            TaskType = TaskType.Install,
            TaskTemplateID = taskTemplateID,
            CreateUserID = username,
            CreateDateTime = DateTime.Now
        };

        TaskAction taskAction = new TaskAction(TaskDao, TaskDeviceDao, ActivityDao, task, username);

        // Call TaskDto.Create to convert Task into TaskDto for client-side use.
        return TaskDto.Create(taskAction.CreateTask());
    }
    catch (Exception exception)
    {
        Logger.Error(exception);
        throw;
    }
}
The GetAll() method is simply a criteria.List() over all rows in a table. The CreateTask method just calls ISession.SaveOrUpdate().
I understand that I have the ability to force reloading the data, but I do not understand why I should have to do this.
When I call SaveOrUpdate(entity), that entity should automatically be added to NHibernate's cache, right? Why would TaskDao.GetAll() return stale data?
I am worried about overusing CommitTransaction(). I do not think that I should call CommitTransaction() after every SaveOrUpdate(); that defeats the purpose of NHibernate's caching. But I do not want stale data in my test cases, either. How can I keep my cache in sync?
You are correct that you should not commit your transaction after every save, but your web service should be creating a new transaction at the start of each web call and committing it at the end.
Web services typically follow the same session-per-request pattern that web sites follow, so make sure your web service infrastructure creates both a new NHibernate ISession and a new transaction with each request. At the end of the request, it should commit any changes that were made.
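As a concrete illustration of that pattern, here is a minimal session-per-request sketch for an IIS-hosted service, placed in Global.asax. SessionFactory.Instance is a hypothetical holder for your configured ISessionFactory, and production code would also want rollback/error handling around the commit:

// requires: using NHibernate; using System.Web;
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // open a fresh ISession and transaction for every incoming web call
    ISession session = SessionFactory.Instance.OpenSession();
    session.BeginTransaction();
    HttpContext.Current.Items["nh.session"] = session;
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    var session = HttpContext.Current.Items["nh.session"] as ISession;
    if (session == null)
        return;

    try
    {
        // commit at the end of the request so writes are flushed and later calls see current data
        if (session.Transaction != null && session.Transaction.IsActive)
            session.Transaction.Commit();
    }
    finally
    {
        session.Dispose();
    }
}

With the DAOs and the web method resolving the same per-request ISession (for example via HttpContext.Current.Items), the test case no longer depends on when the first-level cache happens to be flushed.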
