First, I will outline my issue in case someone has an alternate fix.
The Problem:
I have a WinForms app that uses merge replication. This is working great, except that I needed to make changes to the columns and primary key on 5 tables. I dropped them from the Articles and then made my changes. I then re-added them to the Articles and set the Publication to Reinitialize All.
Unfortunately, this does not work. When I go to run the subscription program, it tells me that the Subscription is invalid.
EDIT 1
I have a correction/addition here. The actual errors I am getting in the Replication Monitor are these -->
Error messages:
The schema script 'tblCaseNotes_4.sch' could not be propagated to the subscriber. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Get help: http://help/MSSQL_REPL-2147201001
Could not drop object 'dbo.tblCaseNotes' because it is referenced by a FOREIGN KEY constraint. (Source: MSSQLServer, Error number: 3726)
Get help: http://help/3726
This seems important because it means that my merge replication sync process is trying to reinitialize but cannot, because of the FOREIGN KEY error above.
The way I was able to fix it on my machine was to use SSMS to delete the DB and then run my program, which creates a DB and syncs it. Unfortunately, I do not have SSMS access to all the remote users' SQL Express installs, as remote connections are turned off for security reasons.
My Idea:
Create a small program that runs a .sql script to delete the DB on the local machine, à la DROP DATABASE MyDB. This is only the test stage, so no data preservation is needed.
Unfortunately I haven't the faintest idea how to have a program do that.
The Code:
This is the code that runs as my program loads. It takes care of creating the local DBs and the subscription if they aren't already there. It then checks whether they need to be synchronized and kicks off a pull sync if needed. I include it because of the possibility that my solution is a change to this code.
I call this code like this -->
MergeRepl matrixMergeRepl = new MergeRepl(SystemInformation.ComputerName + "\\SQLEXPRESS","WWCSTAGE","MATRIX","MATRIX","MATRIX");
matrixMergeRepl.RunDataSync();
MergeRepl is below -->
public class MergeRepl
{
// Declare necessary variables
private string subscriberName;
private string publisherName;
private string publicationName;
private string subscriptionDbName;
private string publicationDbName;
private MergePullSubscription mergeSubscription;
private MergePublication mergePublication;
private ServerConnection subscriberConn;
private ServerConnection publisherConn;
private Server theLocalSQLServer;
private ReplicationDatabase localRepDB;
public MergeRepl(string subscriber, string publisher, string publication, string subscriptionDB, string publicationDB)
{
subscriberName = subscriber;
publisherName = publisher;
publicationName = publication;
subscriptionDbName = subscriptionDB;
publicationDbName = publicationDB;
//Create connections to the Publisher and Subscriber.
subscriberConn = new ServerConnection(subscriberName);
publisherConn = new ServerConnection(publisherName);
// Define the pull mergeSubscription
mergeSubscription = new MergePullSubscription
{
ConnectionContext = subscriberConn,
DatabaseName = subscriptionDbName,
PublisherName = publisherName,
PublicationDBName = publicationDbName,
PublicationName = publicationName
};
// Ensure that the publication exists and that it supports pull subscriptions.
mergePublication = new MergePublication
{
Name = publicationName,
DatabaseName = publicationDbName,
ConnectionContext = publisherConn
};
// Connect to the local SQL Server instance
theLocalSQLServer = new Server(subscriberConn);
// Create a Replication DB Object to initiate Replication settings on local DB
localRepDB = new ReplicationDatabase(subscriptionDbName, subscriberConn);
// Check that the database exists locally
CreateDatabase(subscriptionDbName);
}
/// <exception cref="ApplicationException">There is insufficient metadata to synchronize the subscription. Recreate the subscription with the agent job or supply the required agent properties at run time.</exception>
public void RunDataSync()
{
// Keep program from appearing 'Not Responding'
///// Application.DoEvents();
// Do the needed databases exist on the local SQL Express install?
/////CreateDatabase("ContactDB");
try
{
// Connect to the Subscriber
subscriberConn.Connect();
// if the Subscription exists, then start the sync
if (mergeSubscription.LoadProperties())
{
// Check that we have enough metadata to start the agent
if (mergeSubscription.PublisherSecurity != null || mergeSubscription.DistributorSecurity != null)
{
// Synchronously start the merge Agent for the mergeSubscription
// lblStatus.Text = "Data Sync Started - Please Be Patient!";
mergeSubscription.SynchronizationAgent.Synchronize();
}
else
{
throw new ApplicationException("There is insufficient metadata to synchronize the subscription." +
"Recreate the subscription with the agent job or supply the required agent properties at run time.");
}
}
else
{
// do something here if the pull mergeSubscription does not exist
// throw new ApplicationException(String.Format("A mergeSubscription to '{0}' does not exist on {1}", publicationName, subscriberName));
CreateMergeSubscription();
}
}
catch (Exception ex)
{
// Implement appropriate error handling here
throw new ApplicationException("The subscription could not be synchronized. Verify that the subscription has been defined correctly.", ex);
//CreateMergeSubscription();
}
finally
{
subscriberConn.Disconnect();
}
}
/// <exception cref="ApplicationException"><c>ApplicationException</c>.</exception>
public void CreateMergeSubscription()
{
// Keep program from appearing 'Not Responding'
// Application.DoEvents();
try
{
if (mergePublication.LoadProperties())
{
if ((mergePublication.Attributes & PublicationAttributes.AllowPull) == 0)
{
mergePublication.Attributes |= PublicationAttributes.AllowPull;
}
// Make sure that the agent job for the mergeSubscription is created.
mergeSubscription.CreateSyncAgentByDefault = true;
// Create the pull mergeSubscription at the Subscriber.
mergeSubscription.Create();
Boolean registered = false;
// Verify that the mergeSubscription is not already registered.
foreach (MergeSubscription existing in mergePublication.EnumSubscriptions())
{
if (existing.SubscriberName == subscriberName
&& existing.SubscriptionDBName == subscriptionDbName
&& existing.SubscriptionType == SubscriptionOption.Pull)
{
registered = true;
}
}
if (!registered)
{
// Register the local mergeSubscription with the Publisher.
mergePublication.MakePullSubscriptionWellKnown(
subscriberName, subscriptionDbName,
SubscriptionSyncType.Automatic,
MergeSubscriberType.Local, 0);
}
}
else
{
// Do something here if the publication does not exist.
throw new ApplicationException(String.Format(
"The publication '{0}' does not exist on {1}.",
publicationName, publisherName));
}
}
catch (Exception ex)
{
// Implement the appropriate error handling here.
throw new ApplicationException(String.Format("The subscription to {0} could not be created.", publicationName), ex);
}
finally
{
publisherConn.Disconnect();
}
}
/// <summary>
/// Makes sure the needed database exists locally before allowing any interaction with it.
/// </summary>
/// <param name="whichDataBase">The name of the database to check for.</param>
public void CreateDatabase(string whichDataBase)
{
Database db = LocalDBConn(whichDataBase, theLocalSQLServer, localRepDB);
if (!theLocalSQLServer.Databases.Contains(whichDataBase))
{
//Application.DoEvents();
// Create the database on the instance of SQL Server.
db = new Database(theLocalSQLServer, whichDataBase);
db.Create();
}
localRepDB.Load();
localRepDB.EnabledMergePublishing = false;
localRepDB.CommitPropertyChanges();
if (!mergeSubscription.LoadProperties())
{
CreateMergeSubscription();
}
}
private Database LocalDBConn(string databaseName, Server server, ReplicationDatabase replicationDatabase)
{
return server.Databases[replicationDatabase.Name];
}
/// <summary>
/// Checks for the existence of the Publication. If there is one it verifies Allow Pull is set
/// </summary>
/// <returns>True if Publication is present. False if not.</returns>
public bool CheckForPublication()
{
// If LoadProperties() returns TRUE then the Publication exists and is reachable
if (mergePublication.LoadProperties())
return true;
if ((mergePublication.Attributes & PublicationAttributes.AllowPull) == 0)
{
mergePublication.Attributes |= PublicationAttributes.AllowPull;
}
return false;
} // end CheckForPublication()
/// <summary>
/// Checks for the existence of a Subscription.
/// </summary>
/// <returns>True if a Subscription is present. False if not</returns>
public bool CheckForSubscription()
{
// Check for the existence of the Subscription
return mergeSubscription.IsExistingObject;
} // end CheckForSubscription()
}
The Guerdon (Reward):
This is extremely important to me, so even if I am a flaming idiot and there is a super simple solution, I will be adding a bounty to the correct answer.
EDIT 2
I created this to try to remove the Subscription first... which it does, but it still errors out on the DROP DATABASE portion, saying the database is in use...
class Program
{
static void Main(string[] args)
{
DropSubscription();
DropDB();
}
private static void DropSubscription()
{
ServerConnection subscriberConn = new ServerConnection(".\\SQLEXPRESS");
MergePullSubscription mergePullSubscription = new MergePullSubscription("MATRIX","WWCSTAGE","MATRIX","MATRIX",subscriberConn);
mergePullSubscription.Remove();
}
private static void DropDB()
{
SqlCommand cmd;
string sql;
string dbName = "MATRIX";
SqlConnection sqlConnection = new SqlConnection("Server=.\\SQLEXPRESS;Initial Catalog="+ dbName + ";Integrated Security=True;User Instance=False");
sqlConnection.Open();
sql = "DROP DATABASE " + dbName;
cmd = new SqlCommand(sql,sqlConnection);
cmd.ExecuteNonQuery();
sqlConnection.Close();
}
}
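One likely reason for the "in use" error is that the connection above is opened against the very database being dropped, so the program's own connection keeps it busy. Below is a minimal sketch of an alternative, assuming that is the cause: connect to master instead and force single-user mode before dropping (untested).
private static void DropDB()
{
    string dbName = "MATRIX";
    // Connect to master, not to the database we are about to drop,
    // so this connection does not keep the target database "in use".
    using (SqlConnection sqlConnection = new SqlConnection("Server=.\\SQLEXPRESS;Initial Catalog=master;Integrated Security=True"))
    {
        sqlConnection.Open();
        // Kick off any remaining connections, then drop the database.
        string sql = "ALTER DATABASE [" + dbName + "] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
                     "DROP DATABASE [" + dbName + "];";
        using (SqlCommand cmd = new SqlCommand(sql, sqlConnection))
        {
            cmd.ExecuteNonQuery();
        }
    }
}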
If you're in the testing phase (and I certainly don't recommend significant schema changes on a production system), then just drop the subscription and database on the subscriber machines and start over. If you can connect to them through SSMS, then you can do it from there; or if you have physical access to them, you can do it with SQLCMD.
I have code for dropping subscriptions and databases using SMO but it has to be run on the subscriber. Let me know if you think it would be helpful and I'll post it.
Edited to add: OK, the code is below. I don't have time right now to clean it up so it's raw. RaiseSyncManagerStatus is a method to display the status back to the UI because these methods are invoked asynchronously. Hope this helps -- bring on the guerdon. :-)
public void DropSubscription()
{
try
{
RaiseSyncManagerStatus(string.Format("Dropping subscription '{0}'.", _publicationName));
Server srv = new Server(_subscriberName);
MergePullSubscription sub = GetSubscription(srv.ConnectionContext);
// Remove if it exists
// Cannot remove from publisher because sysadmin or dbo roles are required
if (sub.LoadProperties() == true)
{
sub.Remove();
RaiseSyncManagerStatus("Subscription dropped.");
RaiseSyncManagerStatus("Removing subscription registration from the publisher.");
Server srvPub = new Server(_publisherName);
MergePublication pub = GetPublication(srvPub.ConnectionContext);
// Remove the subscription registration
pub.RemovePullSubscription(srv.Name, _subscriberDbName);
}
else
{
RaiseSyncManagerStatus("Failed to drop subscription; LoadProperties failed.");
}
}
catch (Exception ex)
{
RaiseSyncManagerStatus(ex);
throw;
}
}
public void DropSubscriberDb()
{
try
{
RaiseSyncManagerStatus(string.Format("Dropping subscriber database '{0}'.", _subscriberDbName));
if (SubscriptionValid())
{
throw new Exception("Subscription exists; cannot drop local database.");
}
Server srv = new Server(_subscriberName);
Database db = srv.Databases[_subscriberDbName];
if (db == null)
{
RaiseSyncManagerStatus("Subscriber database not found.");
}
else
{
RaiseSyncManagerStatus(string.Format("Subscriber database state: '{0}'.", db.State));
srv.KillDatabase(_subscriberDbName);
RaiseSyncManagerStatus("Subscriber database dropped.");
}
}
catch (Exception ex)
{
RaiseSyncManagerStatus(ex);
throw;
}
}
If I have understood your 'original' issue correctly, then you need to create a new Snapshot of the publication before it can be reinitialized. This is so that any structural changes you have made are applied to the subscribers.
See Adding Articles to and Dropping Articles from Existing Publications for more information and follow the specific steps for Merge replication.
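For reference, the Snapshot Agent can also be started from code by calling sp_startpublication_snapshot in the publication database on the publisher. A rough sketch using the server and publication names from the question (WWCSTAGE/MATRIX); treat it as untested:
private static void StartSnapshotAgent()
{
    // Run the Snapshot Agent job for the publication so the new article
    // schemas are available when subscribers reinitialize.
    using (SqlConnection conn = new SqlConnection("Server=WWCSTAGE;Initial Catalog=MATRIX;Integrated Security=True"))
    {
        conn.Open();
        using (SqlCommand cmd = new SqlCommand("sp_startpublication_snapshot", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@publication", "MATRIX");
            cmd.ExecuteNonQuery();
        }
    }
}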
Related
In my C# console application I am trying to update an account in CRM 2016. IsFaulted keeps returning true.
The error message it returns when I drill down is the following:
EntityState must be set to null, Created (for Create message) or Changed (for Update message).
Also, in case it might be causing the fault, I have pasted my LINQ query at the bottom.
The answers I found via Google state either that I am mixing ServiceContext and ProxyService (which I am not; I am not using it in this context), or that I am using context.UpdateObject(object) incorrectly (which I am not using either).
Update: Someone just informed me that the above error is caused by my trying to return all the metadata and not just the updated data. I still have no idea how to fix the error, but this information should be helpful.
private static void HandleUpdate(IOrganizationService crmService, List<Entity> updateEntities)
{
Console.WriteLine("Updating Entities: " + updateEntities.Count);
if (updateEntities.Count > 0)
{
try
{
var multipleRequest = new ExecuteMultipleRequest()
{
// Assign settings that define execution behavior: continue on error, return responses.
Settings = new ExecuteMultipleSettings()
{
ContinueOnError = true,
ReturnResponses = true
},
// Create an empty organization request collection.
Requests = new OrganizationRequestCollection()
};
foreach (var account in updateEntities)
{
multipleRequest.Requests.Add(
new UpdateRequest()
{
Target = account
});
}
ExecuteMultipleResponse response = (ExecuteMultipleResponse)crmService.Execute(multipleRequest);
if (response.IsFaulted)
{
int failedToUpdateAccount = 0;
foreach (ExecuteMultipleResponseItem singleResp in response.Responses)
{
if (singleResp.Fault != null)
{
string faultMessage = singleResp.Fault.Message;
var account = ((UpdateRequest)multipleRequest.Requests[singleResp.RequestIndex]).Target;
Log.Error($"Error updating account id: {account.Id}. Error: {faultMessage}.");
failedToUpdateAccount++;
}
}
Log.Debug($"Failed to update {failedToUpdateAccount} accounts.");
}
else
{
Log.Debug("Execute multiple executed without errors");
}
}
catch (Exception ex)
{
Log.Error($"Error while executing Multiplerequest", ex);
}
}
}
// LINQ below
private static List<Account> GetAllActiveCRMAccounts(CRM2011DataContext CRMcontext)
{
Console.WriteLine("Start Getting CRMExistingAccounts ....");
List<Account> CRMExisterendeAccounts = new List<Account>();
try
{
CRMExisterendeAccounts = (from a in CRMcontext.AccountSet
where a.StateCode == AccountState.Active
where a.anotherVariable == 1
select new Account()
{
my_var1 = a.myVar1,
my_var2 = a.myVar2,
AccountId = a.AccountId,
anotherVar = a.alsoThisVar,
}).ToList();
}
catch (FaultException ex)
{
Log.Debug($"GetCRMExistingAccounts Exception { ex.Message}");
Console.WriteLine("GetCRMExistingAccounts Exception " + ex.Message);
throw new Exception(ex.Message);
}
return CRMExisterendeAccounts;
}
And yes, my variables have different names in my system.
The query returns the object just fine with all the correct data.
You can work around this in one of two ways:
1) Create your CRM2011DataContext with the MergeOption set to MergeOption.NoTracking. Entities loaded from a context that is not tracking will have a null EntityState property.
2) You can create a copy of your Entity and save the copy.
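A sketch of both workarounds, assuming CRM2011DataContext derives from the generated OrganizationServiceContext (as crmsvcutil produces) and that the attribute names are placeholders from the question:
// Option 1 sketch: turn off tracking so retrieved entities come back
// with a null EntityState and can be sent straight into an UpdateRequest.
using (var context = new CRM2011DataContext(crmService))
{
    context.MergeOption = Microsoft.Xrm.Sdk.Client.MergeOption.NoTracking;
    // ... run the LINQ query from GetAllActiveCRMAccounts here ...
}

// Option 2 sketch: copy only the changed attributes onto a fresh Entity
// and update the copy instead of the tracked instance.
var copy = new Entity("account") { Id = account.Id };
copy["my_var1"] = account.my_var1; // copy just the attributes you modified
crmService.Execute(new UpdateRequest { Target = copy });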
I am currently using the Change Notifications in Active Directory Domain Services in .NET as described in this blog. This will return all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The other solution is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see whether the event represents the creation of an object? In my tests the value for usnChanged is always usnCreated+1 for user objects, and both are equal for OUs, but can this be assured in high-frequency ADs? It is also possible to compare the created and changed timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
static void Main(string[] args)
{
using (LdapConnection connect = CreateConnection("localhost"))
{
using (ChangeNotifier notifier = new ChangeNotifier(connect))
{
//register some objects for notifications (limit 5)
notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);
notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);
Console.WriteLine("Waiting for changes...");
Console.WriteLine();
Console.ReadLine();
}
}
}
static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
Console.WriteLine(e.Result.DistinguishedName);
foreach (string attrib in e.Result.Attributes.AttributeNames)
{
foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
{
Console.WriteLine("\t{0}: {1}", attrib, item);
}
}
Console.WriteLine();
Console.WriteLine("====================");
Console.WriteLine();
}
LdapConnection _connection;
HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();
public ChangeNotifier(LdapConnection connection)
{
_connection = connection;
_connection.AutoBind = true;
}
public void Register(string dn, SearchScope scope)
{
SearchRequest request = new SearchRequest(
dn, //root the search here
"(objectClass=*)", //very inclusive
scope, //any scope works
null //we are interested in all attributes
);
//register our search
request.Controls.Add(new DirectoryNotificationControl());
//we will send this async and register our callback
//note how we would like to have partial results
IAsyncResult result = _connection.BeginSendRequest(
request,
TimeSpan.FromDays(1), //set timeout to a day...
PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
Notify,
request
);
//store the hash for disposal later
_results.Add(result);
}
private void Notify(IAsyncResult result)
{
//since our search is long running, we don't want to use EndSendRequest
PartialResultsCollection prc = _connection.GetPartialResults(result);
foreach (SearchResultEntry entry in prc)
{
OnObjectChanged(new ObjectChangedEventArgs(entry));
}
}
private void OnObjectChanged(ObjectChangedEventArgs args)
{
if (ObjectChanged != null)
{
ObjectChanged(this, args);
}
}
public event EventHandler<ObjectChangedEventArgs> ObjectChanged;
#region IDisposable Members
public void Dispose()
{
foreach (var result in _results)
{
//end each async search
_connection.Abort(result);
}
}
#endregion
}
public class ObjectChangedEventArgs : EventArgs
{
public ObjectChangedEventArgs(SearchResultEntry entry)
{
Result = entry;
}
public SearchResultEntry Result { get; set; }
}
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We found that change notification generated enough events when monitoring a large directory that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall performance/latency were not the primary reason we switched.
Yes, the USN relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get usnCreated == usnChanged for a new user. But we didn't test that extensively, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case, a missed object create could block email to a new user for an indeterminate time. That obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have taken more work and been hard to test. For DirSync it's more natural, and there's a fast-path resume mechanism that usually avoids a full resync. For safety, I think we triggered a full re-synchronization every day.
DirSync is not as real-time as change notification, but it's possible to get ~30-second average latency by issuing the DirSync query once a minute.
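To give a flavor of the DirSync approach, here is a minimal polling sketch with System.DirectoryServices.Protocols. The server and DN are placeholders borrowed from the blog code above, the caller needs the "Replicate Directory Changes" permission, and the search must be rooted at a naming context:
// Persist the cookie between runs; it is what lets DirSync resume
// after a reboot instead of missing changes.
byte[] cookie = null;
using (LdapConnection connection = new LdapConnection("dunnry.net"))
{
    SearchRequest request = new SearchRequest(
        "dc=dunnry,dc=net",   // must be the root of a naming context
        "(objectClass=user)", // filter to the classes you care about
        SearchScope.Subtree,
        null);                // null = all attributes
    request.Controls.Add(new DirSyncRequestControl(cookie));
    SearchResponse response = (SearchResponse)connection.SendRequest(request);
    foreach (SearchResultEntry entry in response.Entries)
    {
        // A brand-new object arrives with its full attribute set;
        // deletions carry isDeleted = TRUE.
        Console.WriteLine(entry.DistinguishedName);
    }
    // Save the new cookie for the next poll (e.g. once a minute).
    foreach (DirectoryControl control in response.Controls)
    {
        DirSyncResponseControl dirSync = control as DirSyncResponseControl;
        if (dirSync != null)
        {
            cookie = dirSync.Cookie;
        }
    }
}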
Earlier this week the solution designer in my company issued a challenge to all developers to find a bug in some code that he wrote. What happened was that when the build admins retracted his solution package from SharePoint, something went wrong and the whole application pool died.
It is speculated that the cause is one of the following:
A – something in the code
B – something related to retracting the solution from Central Admin
C – none of the above
D – all of the above
I've been looking at the code today but have not been able to find anything askew. I thought I'd put his code here and get an opinion from anyone who is interested in giving one. This is not a serious question, and the challenge was issued in good sport, so please only give it attention if you feel so inclined.
The message from the SD:
As I mentioned in DevDays yesterday, when the SharePoint admins retracted the CityDepartments solution package from SharePoint, the whole IIS Application Pool died. This was before the OWSTIMER.EXE timer job could unlock all the Web Applications and sites that it locked before the actual features were deactivated.
So, since nobody knows what went wrong, I will reward the first person who figures it out (myself included).
As promised, if you can find the problem in one of the FeatureDeactivating events that causes SharePoint to crash spectacularly, like it did last week, then I will buy you 2 movie tickets to any NuMetro or SterKinekor movie that is showing at the moment. The proof will be in the successful deploy and retract (3 times just to be sure) of the fixed solution package(s) in the DEV and QA environments.
List Feature Event Receiver:
using System;
using System.Runtime.InteropServices;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.Taxonomy;
using Microsoft.Office.DocumentManagement.MetadataNavigation;
using Microsoft.Office.Server.SocialData;
namespace CityDepartmentStructure.SharepointExtractTimerJob.Features.CityDepartmentsListFeature
{
/// <summary>
/// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
/// </summary>
/// <remarks>
/// The GUID attached to this class may be used during packaging and should not be modified.
/// </remarks>
[Guid("ce0a04a0-b20b-4587-998a-6817dce2d4d8")]
public class CityDepartmentsListFeatureEventReceiver : SPFeatureReceiver
{
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
try
{
SPServiceContext context = SPServiceContext.GetContext(SPServiceApplicationProxyGroup.Default, SPSiteSubscriptionIdentifier.Default);
SocialTagManager stm = new SocialTagManager(context);
TaxonomySession taxonomySession = stm.TaxonomySession;
TermStore termStore = taxonomySession.DefaultSiteCollectionTermStore;
Group termGroup = termStore.Groups["CityDepartments"];
TermSet termSet = termGroup.TermSets["Directorates"];
using (SPWeb web = properties.Feature.Parent as SPWeb)
{
web.Lists.Add("DepartmentSites", "This list maintains a list of web sites for all Org Units in CCT", SPListTemplateType.Links);
web.Update();
SPList departmentsList = web.Lists["DepartmentSites"];
TaxonomyField taxonomyField = departmentsList.Fields.CreateNewField("TaxonomyFieldType", "OrgLevel") as TaxonomyField;
taxonomyField.Description = "Org Unit in the Org Structure. Can be a Directorate, Department, Branch or Section.";
taxonomyField.SspId = termStore.Id;
taxonomyField.TermSetId = termSet.Id;
taxonomyField.AllowMultipleValues = false;
taxonomyField.CreateValuesInEditForm = false;
taxonomyField.Open = false;
taxonomyField.Group = "CCT Metadata Field Content Type";
taxonomyField.Required = true;
departmentsList.Fields.Add(taxonomyField);
TaxonomyField field = departmentsList.Fields["OrgLevel"] as TaxonomyField;
field.Title = "OrgLevel";
field.Update(true);
departmentsList.Update();
SPView view = departmentsList.DefaultView;
view.ViewFields.Add("OrgLevel");
view.Update();
var navigationField = departmentsList.Fields["OrgLevel"] as SPField;
MetadataNavigationSettings navigationSettings = MetadataNavigationSettings.GetMetadataNavigationSettings(departmentsList);
MetadataNavigationHierarchy navigationHierarchy = new MetadataNavigationHierarchy(navigationField);
navigationSettings.AddConfiguredHierarchy(navigationHierarchy);
MetadataNavigationSettings.SetMetadataNavigationSettings(departmentsList, navigationSettings, true);
departmentsList.Update();
MetadataNavigationKeyFilter navigationKeyFilter = new MetadataNavigationKeyFilter(navigationField);
navigationSettings.AddConfiguredKeyFilter(navigationKeyFilter);
MetadataNavigationSettings.SetMetadataNavigationSettings(departmentsList, navigationSettings, true);
departmentsList.Update();
}
}
catch (Exception ex)
{
throw ex;
}
}
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
try
{
using (SPWeb web = properties.Feature.Parent as SPWeb)
{
SPList departmentsList = web.Lists["DepartmentSites"];
departmentsList.Delete();
}
}
catch (Exception ex)
{
throw ex;
}
}
}
}
Timer Job Feature Event Receiver:
using System;
using System.Runtime.InteropServices;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using System.Linq;
namespace CityDepartmentStructure.SharepointExtractTimerJob.Features.Feature1
{
/// <summary>
/// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
/// </summary>
/// <remarks>
/// The GUID attached to this class may be used during packaging and should not be modified.
/// </remarks>
[Guid("10e80e0f-7be3-46f0-8a7f-fcf806ddf762")]
public class Feature1EventReceiver : SPFeatureReceiver
{
private const string JobName = "ExtractTimerJob";
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
SPService service = GetService();
// Remove job if it exists.
DeleteJobAndSettings(service);
// Create the job.
ExtractTimerJob job = new ExtractTimerJob(JobName, service);
// Create the schedule so that the job runs hourly, sometime
// during the first quarter of the hour.
SPHourlySchedule schedule = new SPHourlySchedule();
schedule.BeginMinute = 0;
schedule.EndMinute = 15;
job.Schedule = schedule;
job.Update();
// Configure the job.
ExtractTimerJobSettings jobSettings = new ExtractTimerJobSettings(service, Guid.NewGuid());
jobSettings.Name = "ExtractTimerJobSettings";
jobSettings.WebServiceLocation = "http://r3pci01.capetown.gov.za:8150/sap/zget_orgstruct";
jobSettings.Update(true);
}
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
DeleteJobAndSettings(GetService());
}
private void DeleteJobAndSettings(SPService service)
{
// Find the job and delete it.
foreach (SPJobDefinition job in service.JobDefinitions)
{
if (job.Name == JobName)
{
job.Delete();
break;
}
}
// Delete the job's settings.
ExtractTimerJobSettings jobSettings = service.GetChild<ExtractTimerJobSettings>("ExtractTimerJobSettings");
if (jobSettings != null)
{
jobSettings.Delete();
}
}
private static SPService GetService()
{
// Get an instance of the SharePoint farm.
SPFarm farm = SPFarm.Local;
// Get an instance of the service.
var results = from s in farm.Services
where s.Name == "SPSearch4"
select s;
SPService service = results.First();
return service;
}
}
}
Without logs it may be hard to figure out what exactly happened, but I would guess that disposing properties.Feature.Parent isn't a good idea.
If you're doing this from PowerShell, it may lead to problems, because it will most probably always try to use the same object.
The next thing is: which script are you using to deploy/retract the solution? Do you explicitly activate/deactivate the feature every time?
Moreover, killing app pools is normal behavior for deployment and retraction, but the problem with starting them again may be related to some kind of timeout - for example, the app pool start is attempted before the pool has really shut down.
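If the disposal theory holds, the fix is to stop disposing the parent web: SPWeb objects obtained from properties.Feature.Parent are owned by SharePoint and should not be disposed by the receiver. A sketch of the reworked deactivation handler (untested):
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    // No using block here: properties.Feature.Parent is owned by
    // SharePoint, so the receiver must not dispose it.
    SPWeb web = properties.Feature.Parent as SPWeb;
    if (web != null)
    {
        SPList departmentsList = web.Lists["DepartmentSites"];
        departmentsList.Delete();
    }
}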
I was in the middle of implementing a database audit trail whereby CRUD operations performed through my controllers in my Web API project would serialize the old and new POCOs and store their values for later retrieval (historical, rollback, etc.).
When I got it all working, I did not like how it made my controllers look during a POST because I ended up having to call SaveChanges() twice, once to get the ID for the inserted entity and then again to commit the audit record which needed to know that ID.
I set out to convert the project (still in its infancy) to use sequences instead of identity columns. This has the added bonus of further abstracting me from SQL Server (though that is not really an issue), and it also allows me to reduce the number of commits and pull that logic out of the controller into my service layer, which abstracts my controllers from the repositories and lets me do work like this auditing in that "shim" layer.
Once the sequence object and a stored procedure to expose it were created, I created the following class:
public class SequentialIdProvider : ISequentialIdProvider
{
private readonly IService<SequenceValue> _sequenceValueService;
public SequentialIdProvider(IService<SequenceValue> sequenceValueService)
{
_sequenceValueService = sequenceValueService;
}
public int GetNextId()
{
var value = _sequenceValueService.SelectQuery("GetSequenceIds @numberOfIds", new SqlParameter("numberOfIds", SqlDbType.Int) { Value = 1 }).ToList();
if (value.First() == null)
{
throw new Exception("Unable to retrieve the next id's from the sequence.");
}
return value.First().FirstValue;
}
public IList<int> GetNextIds(int numberOfIds)
{
var values = _sequenceValueService.SelectQuery("GetSequenceIds @numberOfIds", new SqlParameter("numberOfIds", SqlDbType.Int) { Value = numberOfIds }).ToList();
if (values.First() == null)
{
throw new Exception("Unable to retrieve the next id's from the sequence.");
}
var list = new List<int>();
for (var i = values.First().FirstValue; i <= values.First().LastValue; i++)
{
list.Add(i);
}
return list;
}
}
Which simply provides two ways to get IDs, a single and a range.
This all worked great during the first set of unit tests, but as soon as I started testing it in a real-world scenario, I quickly realized that a single call to GetNextId() would return the same value for the life of that context, until SaveChanges() is called, thus negating any real benefit.
I am not sure if there is a way around this, short of creating a second context (not an option) or going old-school ADO.NET, making direct SQL calls, and using AutoMapper to get to the same net result. Neither of these appeals to me, so I am hoping someone else has an idea.
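For what it's worth, the direct-SQL route can be narrower than full ADO.NET: in EF, SqlQuery on Database (rather than on a DbSet) returns plain values that bypass the context's caching, so every call draws a fresh value. A sketch, assuming SQL Server 2012+ sequences and a placeholder sequence name dbo.OrderIdSequence:
// Requires System.Data.Entity (EF) and System.Linq.
public static class SequenceHelper
{
    public static int GetNextIdDirect(DbContext context)
    {
        // Database.SqlQuery does no entity tracking, so each call
        // round-trips to the server for a new sequence value.
        return context.Database
                      .SqlQuery<int>("SELECT CAST(NEXT VALUE FOR dbo.OrderIdSequence AS int)")
                      .Single();
    }
}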
I don't know if this might help you, but this is how I did my audit log trail using Code First.
The following is coded into a class inheriting from DbContext.
In my constructor I have the following:
IObjectContextAdapter objectContextAdapter = (this as IObjectContextAdapter);
objectContextAdapter.ObjectContext.SavingChanges += SavingChanges;
This is my SavingChanges method, wired up previously:
void SavingChanges(object sender, EventArgs e) {
Debug.Assert(sender != null, "Sender can't be null");
Debug.Assert(sender is ObjectContext, "Sender not instance of ObjectContext");
ObjectContext context = (sender as ObjectContext);
IEnumerable<ObjectStateEntry> modifiedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);
IEnumerable<ObjectStateEntry> addedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
addedEntities.ToList().ForEach(a => {
//Assign ids to objects that don't have
if (a.Entity is IIdentity && (a.Entity as IIdentity).Id == Guid.Empty)
(a.Entity as IIdentity).Id = Guid.NewGuid();
this.Set<AuditLogEntry>().Add(AuditLogEntryFactory(a, _AddedEntry));
});
modifiedEntities.ToList().ForEach(m => {
this.Set<AuditLogEntry>().Add(AuditLogEntryFactory(m, _ModifiedEntry));
});
}
And these are the methods used previously to build up the audit log details:
private AuditLogEntry AuditLogEntryFactory(ObjectStateEntry entry, string entryType) {
AuditLogEntry auditLogEntry = new AuditLogEntry() {
EntryDate = DateTime.Now,
EntryType = entryType,
Id = Guid.NewGuid(),
NewValues = AuditLogEntryNewValues(entry),
Table = entry.EntitySet.Name,
UserId = _UserId
};
if (entryType == _ModifiedEntry) auditLogEntry.OriginalValues = AuditLogEntryOriginalValues(entry);
return auditLogEntry;
}
/// <summary>
/// Creates a string of all modified properties for an entity.
/// </summary>
private string AuditLogEntryOriginalValues(ObjectStateEntry entry) {
StringBuilder stringBuilder = new StringBuilder();
entry.GetModifiedProperties().ToList().ForEach(m => {
stringBuilder.Append(String.Format("{0} = {1},", m, entry.OriginalValues[m]));
});
return stringBuilder.ToString();
}
/// <summary>
/// Creates a string of all modified properties' new values for an entity.
/// </summary>
private string AuditLogEntryNewValues(ObjectStateEntry entry) {
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < entry.CurrentValues.FieldCount; i++) {
stringBuilder.Append(String.Format("{0} = {1},",
entry.CurrentValues.GetName(i), entry.CurrentValues.GetValue(i)));
}
return stringBuilder.ToString();
}
Hopefully this points you in a direction that helps you solve your problem.
If I am logged into Windows 8 as a non-admin, such as the guest account, will this database creation code fail? If so, how can I change it so that it works with non-admin users:
protected List<Order> orders;
string dbName;
#region Constructors
public RestaurantRepository()
{
Initialize();
}
protected void Initialize()
{
dbName = "db_sqlite-net.db3";
// check the database, if it doesn't exist, create it
CheckAndCreateDatabase(dbName);
}
#endregion
#region Database
protected string GetDBPath()
{
string dbRootPath = Windows.Storage.ApplicationData.Current.LocalFolder.Path;
return Path.Combine(dbRootPath, "menufinderwin8.db");
}
// This method checks to see if the database exists, and if it doesn't, it creates
// it and inserts some data
protected void CheckAndCreateDatabase(string dbName)
{
// create a connection object. if the database doesn't exist, it will create
// a blank database
SQLiteAsyncConnection conn = new SQLiteAsyncConnection(GetDBPath());
conn.CreateTableAsync<Order>();
conn.CreateTableAsync<OrderDetail>();
conn.CreateTableAsync<Product>();
conn.CreateTableAsync<Restaurant>();
//using (SQLiteConnection db = new SQLiteConnection(GetDBPath()))
//{
// create the tables
// db.CreateTable<Order>();
//db.CreateTable<OrderDetail>();
//db.CreateTable<Product>();
//db.CreateTable<Restaurant>();
// close the connection
//db.Close();
//}
}
I think as long as the app is installed for all users, it should have access to its own local directory. I don't think you'd be able to share the database between the users, but I don't think it'd fail.
Let me know how it turns out, I'm curious :)