Architectural / design question related to a C# project, Azure, and Office 365 SharePoint - c#

Need feedback and input. Here's the scenario. The end goal is to update a custom list in an Office 365 SharePoint site using data from an external site. Note that SharePoint is not on-premises.
End users fill out a form online at a 3rd-party web site
-- Assume that your org does not want to drop this 3rd-party web site for any reason
Form data is posted to an internal database
-- Assume the DB is inaccessible from the 3rd-party site except via a web-form grid view and managed CSV exports
A webhook endpoint is configured to also send the form data to a receiver on MS Azure
The Azure "receiver" endpoint gets the form data and stores it in an Azure storage account queue
That's the first phase. How would you "feel" about the second phase being this scenario?
A service, coded in C# and installed on a local server, that periodically wakes up
-- Yes, I know that you can also host a C# service on Azure. But let's assume that you were a frugal, penny-pinching accountant in a previous life.
-- Yes, I'm also aware that a local-service approach short-circuits the entire online path that the form data would otherwise take. But let's assume that at least one other person on the team agrees that locally we have much more control over what we can do with the data, assuming that its final destination as a new item in a SharePoint list is just the tip of the iceberg.
Connects to Azure online storage
Downloads all queue items for processing
Saves the queue items as new items in an Office 365 SharePoint list
Dequeues the items from the Azure queue
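A rough sketch of that loop, assuming the classic Azure storage SDK plus SharePoint CSOM with SharePointOnlineCredentials; the URLs, queue/list names, credentials, and field mapping are all placeholders:

using System;
using System.Security;
using Microsoft.SharePoint.Client;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static void DrainQueueToSharePoint()
{
    CloudQueue queue = CloudStorageAccount.Parse("<storage-connection-string>")
        .CreateCloudQueueClient()
        .GetQueueReference("form-submissions");

    var password = new SecureString();
    foreach (char c in "<password>") password.AppendChar(c);

    using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/forms"))
    {
        ctx.Credentials = new SharePointOnlineCredentials("service@contoso.com", password);
        List list = ctx.Web.Lists.GetByTitle("Form Submissions"); // hypothetical list

        CloudQueueMessage msg;
        while ((msg = queue.GetMessage()) != null)
        {
            ListItem item = list.AddItem(new ListItemCreationInformation());
            item["Title"] = msg.AsString; // real code would parse the JSON into fields
            item.Update();
            ctx.ExecuteQuery();

            queue.DeleteMessage(msg); // dequeue only after the item is saved
        }
    }
}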
Note: In this scenario we're not taking advantage of an Azure queue trigger, which would require the second phase of the project code to also reside in Azure, as an Azure Function (or Function App, whichever the correct nomenclature is). The strategy for this approach is to save the time of going through all the steps of coding authentication (OAuth or otherwise) to access the Office 365 SharePoint list. There are lots of steps here and various choices of SharePoint access methodology, i.e. REST, CSOM, etc.
Note 2: A local service app in the customer domain using local domain credentials (with the credential also added to the Office 365 SharePoint site) will have trusted access to the SharePoint list, so there's no need to configure AD or any certificates just to add a new item to a SharePoint list.
-- Assume you already have some C# plumbing for reading the queue and that you can successfully read items from it. The only missing piece at this point is sending the data to Office 365 SharePoint Online.
So, yay or nay? To help with understanding: what are the flaws in the second-phase thinking/strategy, and how would they negatively impact you if you were a developer having to maintain this solution after it goes into production, in the event I am abducted by space aliens or fall into a very deep sinkhole that winds up somewhere on the other side of the planet? Would you lay flowers on my cemetery headstone or spray-paint graffiti on it?
What would you prefer, if anything, to see as the second phase of the solution (that is, everything after storing the form data into the Azure Storage queue)? Keep in mind the monthly costs of Azure.

Related

How to limit usage of a tool to users within your company network?

We have a developer debugging tool that helps manipulate the security section of a database that our product depends on. The tool's purpose is to inject state into the database to reduce the time needed to create test scenarios. The database is not a typical database that one can manipulate using SQL; rather, it is a binary file that only our tool can manipulate. This is a C# application.
If this tool got outside our company (say someone emailed it to a customer, who shared it somewhere public), that could open up a lot of security issues.
We'd like to build intelligence into this tool so that it is usable only within the company or on the network of a partner with whom we have shared the tool. We have no knowledge of the partner's network.
I am wondering what the suggested ways of implementing this are. For example:
Ping the company Active Directory server or Exchange server. Allow use of the tool if you can reach one of these servers.
Package a certificate with the tool that expires a month from the build date. Always check whether the cert has expired before allowing use of the tool.
A modification of (2): make every user request a key to unlock the tool after a specific date.
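For option 1, a minimal sketch of a reachability check; the domain name is a placeholder, and as the answer below notes, any client-side check like this can be patched out of a .NET assembly:

using System;
using System.DirectoryServices.ActiveDirectory;

static bool CanReachCompanyDirectory()
{
    try
    {
        // Throws if the machine is not joined to, or cannot reach, a domain.
        Domain domain = Domain.GetComputerDomain();
        return domain.Name.Equals("corp.example.com", StringComparison.OrdinalIgnoreCase);
    }
    catch (ActiveDirectoryObjectNotFoundException)
    {
        return false;
    }
    catch (ActiveDirectoryOperationException)
    {
        return false;
    }
}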
Before we go implement a solution, I am wondering if there is already a library that does this.
Thanks
Assuming you host the "file" inside your organization and all parties just access it somehow: if you give both the data and the tools to modify it to external partners, there is nothing really to stop them from modifying the data as they please (short of legal/administrative actions, but that is outside of SO's scope).
There is also really not much you can do to protect code running on a user's machine, irrespective of whether it is C# or natively compiled code. .NET code is a bit easier to modify and to bypass protections in, but if you are concerned about securing access to a file, you need to protect the files/servers rather than worry about client-side code.
The usual solution to such a problem is authentication and authorization: only allow authenticated users to access the file, and only accept changes from authorized users.
If you use file-based storage, then inside your organization regular Windows domain accounts would work for authentication and regular file system permissions would work for authorization.
For outside partners you would probably need a server to perform modification of the file(s), with authentication/authorization possibly using ADFS or OAuth.

Creating mail-enabled security group in Azure Active Directory or Exchange Online

I am using Azure Active Directory as the base for security/permissions for a SharePoint 2013 portal. I am developing an administration UI that uses the Graph API to load and edit data in Azure Active Directory. The setup also includes Exchange Online, such that each user defined in the system is given a mailbox in my domain. The groups and users in Azure Active Directory are synced to SharePoint 2013 and Exchange Online using DirSync.
The plan is to use the administration page to consolidate certain expected actions, including creating new users, connecting them to the relevant security groups, and creating new mail-enabled security groups.
As described in the Graph API Group Overview, only "pure" security groups can be created. Furthermore, the Graph API does not allow the mailEnabled field to be updated to true after creation... In fact, the Azure Active Directory management screen is so limited that a mail-enabled group cannot be created there either (or am I missing something?).
I am trying to find a solution that will still let me consolidate all of the actions I wish to allow under one administration application.
It seems that PowerShell might be an option, though I am not exactly well-versed in using PowerShell.
I have tried to find an API that would allow me to connect to Exchange Online and perform similar actions. DirSync seems to sync everything from there to Azure Active Directory just fine: I create a mail-enabled security group in Exchange and I get a mail-enabled security group in Azure Active Directory moments later. However, I have not been able to find such an API. Does it exist?
Am I looking at it all wrong? Is there a simple solution to my needs as stated above that I am just not aware of?
Doesn't sound to me like you're missing something. The thing is, mail-enabled security groups only make sense if you have something providing the "mail". For Microsoft's online services, this would be Exchange. Azure AD and "pure" security groups apply everywhere, but one could argue a mail-enabled group only applies if Exchange is in the picture. Anyway, that's an attempt to explain the why.
As to programmatically accessing Exchange, you have many options (see Exchange Online and Exchange 2013 development), in no particular order:
a) Exchange PowerShell cmdlets
Exchange allows you to programmatically create and manage distribution groups with the Exchange Online PowerShell cmdlets. For example, the New-DistributionGroup cmdlet:
New-DistributionGroup -Name "My Favorite People" -Type "Security"
As you've noticed, these groups will be synced back to Azure AD as read-only. You can then use them (for example) with Azure AD Graph API to do RBAC based on group membership.
a.1) Exchange PowerShell cmdlets run from C# code
If you want to run these cmdlets from other .NET code (e.g. C#), it is doable. Take a look at the example in How to: Get a list of mail users by using the Exchange Management Shell; it can easily be adapted to get/set groups instead of users.
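Very roughly, the remote-runspace pattern from that article looks like this; the endpoint URI is the standard Exchange Online PowerShell endpoint of that era, and the credential values are placeholders:

using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Security;

// Placeholder credentials for an Exchange Online admin account.
var password = new SecureString();
foreach (char c in "<password>") password.AppendChar(c);
var credential = new PSCredential("admin@contoso.onmicrosoft.com", password);

var connectionInfo = new WSManConnectionInfo(
    new Uri("https://outlook.office365.com/powershell-liveid/"),
    "http://schemas.microsoft.com/powershell/Microsoft.Exchange",
    credential);
connectionInfo.AuthenticationMechanism = AuthenticationMechanism.Basic;

using (Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo))
{
    runspace.Open();
    using (PowerShell ps = PowerShell.Create())
    {
        ps.Runspace = runspace;
        ps.AddCommand("New-DistributionGroup")
          .AddParameter("Name", "My Favorite People")
          .AddParameter("Type", "Security"); // "Security" creates a mail-enabled security group
        foreach (PSObject result in ps.Invoke())
            Console.WriteLine(result);
    }
}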
b) Exchange Web Service and EWS Managed API
Your best bet at this point might be to use the EWS Managed API:
Get started with EWS client applications
Get started with EWS Managed API client applications
c) Office 365 API (although this scenario is not currently supported)
Keep an eye on the Office 365 API, which doesn't look like it supports this scenario yet, but looks promising: http://msdn.microsoft.com/en-us/library/office/dn605892(v=office.15).aspx

Security trimming of SharePoint search results for the end user when the query runs as a system user

Basically we have an iOS app and also a server application. The iOS app would like to get search results for the end user from SharePoint, so the iOS app submits a search request to our server application, which performs the search by using web services (the client API) against SharePoint under a system account. The problem for us is how to do the security trimming of the search results for the end user of the iOS app.
We could send the end user's user name and password from the iOS app to our server and use that credential to call the search web services; however, that is not desirable. I also looked at custom security trimming (CST; see Trim SharePoint Search Results for Better Security). It looks like it won't solve our problem, since in the ISecurityTrimmer2.CheckAccess method the claims are for the user who performed the query, which is the system user; we would need to call back into our application to find out the real user, which seems impractical here:
public BitArray CheckAccess(IList<Uri> documentCrawlUrls, IDictionary<string, object> sessionProperties, IIdentity passedUserIdentity)
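To make that limitation concrete, a trimmer body has roughly this shape (an illustrative, deny-everything sketch around the signature above):

public BitArray CheckAccess(IList<Uri> documentCrawlUrls,
                            IDictionary<string, object> sessionProperties,
                            IIdentity passedUserIdentity)
{
    var access = new BitArray(documentCrawlUrls.Count);
    for (int i = 0; i < documentCrawlUrls.Count; i++)
    {
        // passedUserIdentity here is the service account that executed the
        // query, so the real iOS end user cannot be recovered at this point.
        access[i] = false; // deny by default in this sketch
    }
    return access;
}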
Any other thoughts on how we could implement the trimming in our situation? Any suggestions are really appreciated.
By the way, the iOS app does use the SharePoint web services directly today; however, we want to avoid that, since we want the search API to be generic and not limited to SharePoint. We are also thinking about sending the authentication header/cookie from the iOS app to the server for search, but that might not be a good idea and could be thought of as a security hack.
Update 1:
Based on this post, Security trimming in search web service, and Creating a Custom Pre-Security Trimmer for SharePoint 2013, there is ISecurityTrimmerPre, with which we could add more claims to the user. In theory it could call back to our web services to add the claims for the correct end user, although I don't really know how it would know which end user a given search is for if there are multiple searches going on at the same time. Another worry is that we would affect general SharePoint search as well, which we don't want to do, since we are inserting ourselves into the search query pipeline.

How to permanently save user data in a C# Windows Store (Metro) app?

I have some user data in my Windows Store C# app. I just learned that local storage and app data are erased after a new version of the app is installed (or the user reinstalls the app).
How do I store the data permanently? Can it be done transparently for the user?
And what about enterprise-class apps: how do you access more robust data stores, like databases?
Removing all local data when an app is uninstalled is the expected pattern for Windows Store apps.
If you want to store data permanently, my recommendation is that you consider building a back-end data store and services to access them. Then you control the server-side data, and can associate the data with the users when they install your app (note that if you plan to store data and not delete it when the user uninstalls the app, you should probably call that out in your app's privacy policy).
There are several good options in terms of building back-end services, and I explore several of them in a blog series I'm currently working on:
http://bitly.com/bundles/devhammer/2
The series covers building a back-end game leaderboard service which stores data in a SQL Database on Windows Azure (though the concepts are applicable to services you host yourself as well), using one of 3 stacks:
WCF Data Services
ASP.NET Web API
Windows Azure Mobile Services
Any of those three stacks will allow you to create a robust back-end for your apps, and can be leveraged across platforms.
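To give a flavor of the ASP.NET Web API option, a minimal, hypothetical controller might look like this, with an in-memory list standing in for the SQL Database:

using System.Collections.Generic;
using System.Web.Http;

public class ScoresController : ApiController
{
    // Stand-in for a persistent store such as SQL Database on Windows Azure.
    private static readonly List<int> Scores = new List<int>();

    // GET api/scores
    public IEnumerable<int> Get()
    {
        return Scores;
    }

    // POST api/scores
    public void Post([FromBody] int score)
    {
        Scores.Add(score);
    }
}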
With respect to transparency, you can definitely make the above services functionally transparent to the user, but as noted above, it's a good idea to also be transparent about the fact that you plan to continue to store data after the app is uninstalled, and perhaps even give the user options for deleting their data. Pete Brown recently posted a good overview of traits of a good Windows Store app privacy policy, and addresses this a bit in the post:
http://10rem.net/blog/2013/01/21/traits-of-a-good-windows-store-app-privacy-policy
For more info on Windows Store app development, register for Generation App.
You can use something like SkyDrive or Dropbox to store the files.
EDIT:
There is no database access support in WinRT. While you can use something like SQLite to store data locally, it would be used mostly for caching; the expectation is that you persist the data somewhere in the cloud, so you should still upload any data you want stored somewhere outside of the machine.
If you want to store files on the machine that don't get deleted with your app, you can save them somewhere in the Documents/Pictures/Music/Videos libraries, depending on where they fit best.
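For example, a minimal sketch using FileSavePicker; since the user picks the location, no library capability is required, and jsonPayload is a placeholder for whatever serialized state your app holds:

using System.Collections.Generic;
using Windows.Storage;
using Windows.Storage.Pickers;

// Inside an async method of your app:
string jsonPayload = "{ }"; // your serialized user data

var picker = new FileSavePicker
{
    SuggestedStartLocation = PickerLocationId.DocumentsLibrary,
    SuggestedFileName = "userdata"
};
picker.FileTypeChoices.Add("JSON data", new List<string> { ".json" });

StorageFile file = await picker.PickSaveFileAsync();
if (file != null)
{
    // Survives app uninstall because it lives outside the app's data folders.
    await FileIO.WriteTextAsync(file, jsonPayload);
}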

WCF -> ILM -> Web Services -> SQL Server

My employer currently has most of its database access going through C# SqlDataAdapters and SqlCommands on a web services server, or through components in applications. These are mostly Windows Forms apps that are run on the intranet and internet, depending on their functionality.
I have been researching WCF quite a bit, and I feel it would be a good fit for us. Also, my manager has a copy of ILM (MS Identity Lifecycle Manager) that he would like to use to provide SSO support for authentication and authorization for all of our applications.
Our applications request data from the database, and it is returned primarily in DataTables. I know collections are better; it is just the established practice here. So I am trying to find a solution that is secure, authenticates through ILM, and returns data from the web services server to the client in a DataSet (at first, migrating to collections later).
My question is will this work or will it be too slow?
Client calls routine on WCF requesting data
WCF server checks with ILM to see if its ok to do so
WCF calls webServices server to get the data
Dataset or collection is passed back to the client.
If this is feasible, how would I go about connecting to ILM for authentication? Is there a slick way to do it in the Web.config file, or would I have to do it at the message level on my own?
Thanks in advance.
I am familiar with ILM. It's not an authentication service. ILM means Identity Lifecycle Manager, and that's a pretty good description of what it can do. It can provision new users, deprovision old users, and copy identity data between identity stores. It also provides a password synchronisation service. You still use Active Directory or AD LDS (ex-ADAM) or some other directory for AuthN and AuthZ.
Whilst ILM stores a whole load of data about your users, you are strongly discouraged from accessing that data directly.
[EDIT]
ILM does not provide LDAP services. Think of it as a manager: it doesn't do any work itself, it just rearranges things periodically. Just as your manager moves data around in the form of emails, ILM moves data around in the form of account details.
ILM is a tool for managing identities across directories and databases. It doesn't make sense to consider ILM in the context of a single store, SQL, AD or any other; its job is to marshal data between stores. It wouldn't have anything to do if there were only a single store.
Here's a typical scenario: you create a SQL table called People containing columns for firstName, lastName, jobTitle, department, a uniqueID, startDate and endDate. ILM is hooked into this table. It does a daily import and there is a new row. ILM uses the data in this row to create a userID in AD, another in Domino and another in a different SQL Database. It uses the jobTitle and department fields to assign group membership in AD, mailing lists in Domino and permissions in SQL.
The user starts and works for a few weeks and then resigns. You set the endDate in the table and ILM notices this change on its next import. It updates the AD account to expire on that date and stores a delayed action to delete it after 90 days. 90 days later, it deletes the account. Likewise with the other accounts.
You can use your personnel system instead of the SQL table but (a) it's not usually in the right format or maintained timely enough and (b) they're often itchy about letting you have access to their data.
I'm not hugely familiar with ILM, but I'm guessing that it is pretty granular down to specific data queries. With WCF you can hook in your own identity provider by implementing IAuthorizationPolicy (like so) and providing your own "principal". I suspect it would be pretty easy to write a principal that works against ILM, but this would presumably be for pretty broad checks - "do I have CustomerAudit access" - rather than "can I access CustomerAudit for customers in the north-east".
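A rough sketch of that shape, with a hypothetical role lookup standing in for the ILM-backed store:

using System;
using System.Collections.Generic;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;
using System.Security.Principal;

public class CustomAuthorizationPolicy : IAuthorizationPolicy
{
    private readonly string id = Guid.NewGuid().ToString();

    public string Id { get { return id; } }
    public ClaimSet Issuer { get { return ClaimSet.System; } }

    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
        object value;
        if (!evaluationContext.Properties.TryGetValue("Identities", out value))
            return false;

        var identities = value as IList<IIdentity>;
        if (identities == null || identities.Count == 0)
            return false;

        // Hypothetical: look up the caller's roles in your own identity store.
        string[] roles = LookupRoles(identities[0].Name);
        evaluationContext.Properties["Principal"] =
            new GenericPrincipal(identities[0], roles);
        return true;
    }

    private static string[] LookupRoles(string userName)
    {
        return new[] { "CustomerAudit" }; // placeholder
    }
}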
A nice thing about using a principal is that you can also use [PrincipalPermission] checks on methods, so you don't need to add broad security checks to your code (the CLR enforces [PrincipalPermission] directly).
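And the corresponding declarative check on a service method; the role name is hypothetical:

using System.Data;
using System.Security.Permissions;

public class CustomerService
{
    // The CLR demands the role before the method body executes.
    [PrincipalPermission(SecurityAction.Demand, Role = "CustomerAudit")]
    public DataSet GetCustomerAudit(int customerId)
    {
        return new DataSet(); // fetch and return the real data here
    }
}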
