I want to develop something PaaS-like for IIS: users will be able to upload DLLs and I will host them. Those DLLs will be ServiceStack services.
I want to sandbox those APIs, so that they can access the internet only to certain IPs, and to limit their OS file system access to a single directory with a maximum size of, say, 20 MB.
I know that on Linux I have containers like Docker that can help me do that easily. Any ideas how to do this in .NET? (Open source libraries that help are more than welcome :))
You could use Code Access Security and Trust levels for that.
Take a look at:
http://msdn.microsoft.com/en-us/library/wyts434y(v=vs.100).aspx
http://msdn.microsoft.com/en-us/library/87x8e4d1(v=vs.100).aspx
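For illustration, here is a minimal sketch of the sandboxed-AppDomain flavour of this on .NET Framework 4.x (note that CAS enforcement changed in .NET 4). All paths, the whitelisted IP address, and the TenantService.dll name are placeholders, and the 20 MB directory quota would still have to be enforced separately (e.g. with an NTFS/FSRM quota):

    using System;
    using System.Net;
    using System.Reflection;
    using System.Security;
    using System.Security.Permissions;
    using System.Text.RegularExpressions;

    // Runs inside the sandboxed AppDomain and pulls in the uploaded assembly.
    public class TenantLoader : MarshalByRefObject
    {
        public void LoadTenantAssembly(string path)
        {
            // The uploaded ServiceStack plug-in would be loaded and wired up here;
            // it executes with only the permissions granted to this AppDomain.
            Assembly tenantAssembly = Assembly.LoadFrom(path);
            Console.WriteLine("Loaded " + tenantAssembly.FullName + " in partial trust.");
        }
    }

    public class SandboxHost
    {
        public static void Main()
        {
            const string tenantDir = @"C:\PaasHost\Tenants\Tenant1";   // placeholder path

            // Grant set for the untrusted code: execute, touch only its own folder,
            // and make outbound HTTP calls only to one whitelisted address.
            var grant = new PermissionSet(PermissionState.None);
            grant.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
            grant.AddPermission(new FileIOPermission(
                FileIOPermissionAccess.Read |
                FileIOPermissionAccess.Write |
                FileIOPermissionAccess.PathDiscovery, tenantDir));
            grant.AddPermission(new WebPermission(
                NetworkAccess.Connect, new Regex(@"http://203\.0\.113\.10/.*"))); // placeholder IP

            var setup = new AppDomainSetup { ApplicationBase = tenantDir };
            AppDomain sandbox = AppDomain.CreateDomain("Tenant1Sandbox", null, setup, grant);

            // Create the loader inside the sandbox and hand it the uploaded DLL.
            var loader = (TenantLoader)sandbox.CreateInstanceAndUnwrap(
                typeof(TenantLoader).Assembly.FullName,
                typeof(TenantLoader).FullName);
            loader.LoadTenantAssembly(tenantDir + @"\TenantService.dll"); // placeholder file
        }
    }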
I have a C# console application running as a WebJob in Azure PaaS. Since it is a legacy system and uses a local UNC path to put the generated PDF, I am exploring ways to do this with Azure Storage. Following this, I have created a storage account, then a File share, and finally a directory inside the file share. I can access the directory from a Windows machine by entering the login credentials, so I know the storage is all set up and working.
Now I want to replace the UNC path in my C# code with the UNC(?) path on Azure PaaS, but I am wondering whether that would work and, if yes, how I should handle the credentials. Since Microsoft says that File shares support SMB 3.0, I reckon I should be able to use one just the way I use any on-premises drive. I do not want to use the REST APIs to do the file operations as defined here and in the video here, because that would involve code changes which in my case would be a huge exercise.
Since File shares support the SMB protocol, I was expecting to find examples where one is used from a WebJob. Can somebody point me to the right resource or guide me on how I can accomplish this piece of functionality?
Here's your problem -
From the App Service sandbox Wiki -
Restricted Outgoing Ports
Regardless of address, applications cannot connect to anywhere using ports 445, 137, 138, and 139. In other words, even if connecting to a non-private IP address or the address of a virtual network, connections to ports 445, 137, 138, and 139 are not permitted.
That's largely SMB traffic.
Your options are limited. I would try publishing to Cloud Services instead (a worker role) - still PaaS, but with a vintage feel to it and no outbound port restrictions.
Service Fabric with the Guest Executable programming model could also be an option, although it's probably a little too involved for a simple console app. Pick Windows nodes for the full .NET Framework.
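If you do land on a worker role (or any Windows host without those port restrictions), the share can be mounted with net use, using the storage account name as the user and the account key as the password, after which your existing System.IO calls keep working against the UNC path. A rough sketch; the storage account, share, key and file names are placeholders:

    using System.Diagnostics;
    using System.IO;

    class ShareExample
    {
        static void Main()
        {
            // Authenticate the SMB session to the Azure File share; the storage
            // account key is the password. Account/share names are placeholders.
            var netUse = Process.Start("net.exe",
                @"use \\mystorageacct.file.core.windows.net\myshare " +
                @"/user:AZURE\mystorageacct <storage-account-key>");
            netUse.WaitForExit();

            // From here on the share is just another UNC path for System.IO.
            File.Copy(@"C:\temp\report.pdf",
                      @"\\mystorageacct.file.core.windows.net\myshare\reports\report.pdf",
                      overwrite: true);
        }
    }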
At work we use Active Directory.
Active Directory (AD) is a directory service that Microsoft developed
for Windows domain networks. It is included in most Windows Server
operating systems as a set of processes and services. Initially,
Active Directory was only in charge of centralized domain management.
Starting with Windows Server 2008, however, Active Directory became an
umbrella title for a broad range of directory-based identity-related
services. (source)
My first goal is to create a small, lightweight Windows Forms application to search on first name and last name, using only parts of the name.
Context: I like some of the functionality of this tool, but it is not open source, and I need to make changes.
And since Stack Overflow does not encourage reverse engineering, I would rather write my own software ;-)
Being a Microsoft .NET developer, I started looking for Microsoft .NET libraries: which ones are there?
Is this it? Or is this it? I can't seem to find any really useful tutorials or easy-to-use API documentation.
I have read this: Using LDAP and Active Directory with C# 101, which shares this. But that is not much to go on.
Hence my question: do you know whether there is open source C# code available for download that supports connecting to Active Directory and performing LDAP operations?
Novell is a software company providing a similar but competing directory, and they seem to provide excellent documentation about their 'LDAP Libraries for C sharp'. This got me wondering:
could this be used to 'Establish an LDAP Connection' and 'Perform LDAP Operations and Obtain Results' against Active Directory? Or should I just use Microsoft code?
Considering that AD is a directory services database, and LDAP (Lightweight Directory Access Protocol) is one of the protocols you can use to talk to it, it should be possible, right?
Please share your knowledge and experience with me. I am a total n00b when it comes to AD. I promise: once you get me started, I will share my beautiful code with you all.
You can try using this Novell-based repository.
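If you end up on the built-in Microsoft stack instead, System.DirectoryServices already covers the "search on partial first/last name" goal. A minimal sketch, assuming a reference to System.DirectoryServices.dll; the LDAP path and the name fragments are placeholders:

    using System;
    using System.DirectoryServices;   // add a reference to System.DirectoryServices.dll

    class AdNameSearch
    {
        static void Main()
        {
            // LDAP path of the domain to search; placeholder value.
            using (var root = new DirectoryEntry("LDAP://DC=example,DC=local"))
            using (var searcher = new DirectorySearcher(root))
            {
                // givenName = first name, sn = last name; the trailing * allows partial matches.
                searcher.Filter = "(&(objectCategory=person)(objectClass=user)(givenName=jo*)(sn=smi*))";
                searcher.PropertiesToLoad.Add("displayName");

                foreach (SearchResult result in searcher.FindAll())
                {
                    // Not every entry is guaranteed to carry displayName, hence the count check.
                    string name = result.Properties["displayName"].Count > 0
                        ? (string)result.Properties["displayName"][0]
                        : "(no display name)";
                    Console.WriteLine("{0}  [{1}]", name, result.Path);
                }
            }
        }
    }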
I've been writing desktop apps in C# for some time now, but I'm increasingly getting frustrated with the fact that not everyone has .NET 2.0 or higher installed, and I don't have the option of upgrading their systems to meet my needs. My apps are mostly utilities that run alongside the main program of the company I work for; they access the file system and the registry.
Being relatively new to programming in general, I was wondering whether moving these tools to the web would solve some of my problems, but I have no idea if web apps can have access to these parts of Windows. I was thinking of writing these web apps in either Rails or ASP.NET. So my question is this: can a web app access and modify the registry and file system of Windows?
Thanks.
Nope, "web apps" like asp.net or rails apps run on the server alone and just serve html to the client. So all the client-side code can do is what jscript running in the browser sandbox can do, ie no file access or registry access.
You can however install an activex on the client computer that gets full access, but the user has to agree to install it as it's a security risk.
Writing the apps as web apps instead (and Rails is cool to use) is a good option - your users don't need to install anything, upgrades are easy to do, and dependencies are no longer a problem.
However, you now need to start re-architecting your apps so they do not need to write anything to the client, except a cookie (which is stored in the browser). If you can do this, then migrating to a web app will be great.
If you cannot, my advice is to learn the same language that your company's app is written in. Once you do that, the company app will have taken care of the dependencies already, and you will just need to offer your utilities alongside the app, perhaps even in the installer, or just by copying the files into a subdirectory. If you're thinking of learning Ruby, then learning the corporate language will be just as difficult - except that you'll be able to reuse a lot of the code used in the main app.
No, a traditional ASP.NET application cannot access the file system or registry on the Windows box, simply because it doesn't actually run on the client machine. Instead, it runs on the server, where it does not have access to the local machine.
It is possible to have portions of the application run on the client machine - browser-hosted applications, for instance. However, these would require that the 2.0 Framework be installed on the customer's machine, which puts you right back at square one.
No, this isn't possible. Web applications cannot modify the registry and/or file system on a user's machine because of the security implications. You would need to develop a Windows app to make these kinds of changes. You could always make this tool available for download on your website, though.
No, you can't do that with a web application. Besides what others have already said, a web application runs in a browser, not inside the operating system, so all you can do is what browsers allow you to do, not everything you want, and browsers don't allow you to take control of the host machine.
I'm guessing the desktop app used in your company uses the registry to store workstation- or user-specific (state) data.
Moving to a web-based app does not mean storing state data is no longer possible; just account for it by including a table in your database that can be used to save that same (state) data. The registry is no longer needed.
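As a sketch of what that can look like (the table, column and class names here are made up for illustration), a single key/value table on the server replaces the per-user registry writes:

    using System.Data.SqlClient;

    // Table assumed (illustrative): UserSettings(UserName, SettingKey, SettingValue), all nvarchar.
    public class UserSettingsStore
    {
        private readonly string _connectionString;

        public UserSettingsStore(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Save(string userName, string key, string value)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                @"UPDATE UserSettings SET SettingValue = @value
                      WHERE UserName = @user AND SettingKey = @key;
                  IF @@ROWCOUNT = 0
                      INSERT INTO UserSettings (UserName, SettingKey, SettingValue)
                      VALUES (@user, @key, @value);", conn))
            {
                cmd.Parameters.AddWithValue("@user", userName);
                cmd.Parameters.AddWithValue("@key", key);
                cmd.Parameters.AddWithValue("@value", value);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }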
Another pro is that by moving to a fully web-based application, you never have to worry about your end users: because the code is running on the server, all the end user gets is the output in HTML :-D
The only thing to keep in mind is cross-browser compatibility: don't create an app that works only in IE, for instance; it has to look and work the same in all major browsers.
There are a few products out there, such as Xenocode and VMware's ThinApp, that allow you to virtualize your app's dependencies to the point where your .NET app can run on a machine without the .NET Framework installed. Just another option from left field.
I'm working on a graduation project for one of my university courses, and I need to find some place to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any site allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler simulates a normal user. It accesses sites the way browsers do, getting the HTML code (JavaScript, etc.) returned from the server (so no internal access to server code). Given that, any site can be crawled.
Be aware of web crawler ethics guidelines: there are pages you shouldn't index or whose links you shouldn't follow, and web developers publish files and instructions for crawlers (such as robots.txt) saying what you can index or follow.
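To make that concrete, fetching a page from a crawler is just an ordinary HTTP request; the URL and user-agent string below are placeholders, and a real crawler should read and honour the site's robots.txt before requesting anything:

    using System;
    using System.Net;

    class MiniCrawler
    {
        static void Main()
        {
            var client = new WebClient();
            // Identify your crawler; the user-agent string is a placeholder.
            client.Headers[HttpRequestHeader.UserAgent] = "MyGraduationCrawler/1.0";

            // A polite crawler fetches and honours http://example.com/robots.txt first.
            string html = client.DownloadString("http://example.com/"); // placeholder URL

            Console.WriteLine("Fetched {0} characters of HTML.", html.Length);
            // From here: extract the links from html, queue them, repeat - that's the whole "crawl".
        }
    }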
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web servers don't do this due to the potential security implications, since there will be several other people running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as not to cause problems for your neighbors; your primary concern will be not using too much CPU or bandwidth. This isn't just politeness - most web hosts will suspend your hosting if you're causing problems on their network, such as starving the other users of your hardware of resources by consuming them all yourself. You can usually burst to higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
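A crude but effective way to stay under those limits is simply to pause between requests. A sketch; the URLs and the one-second delay are arbitrary placeholders:

    using System;
    using System.Net;
    using System.Threading;

    class ThrottledFetcher
    {
        static void Main()
        {
            var client = new WebClient();
            string[] queue = { "http://example.com/page1", "http://example.com/page2" }; // placeholders

            foreach (string url in queue)
            {
                client.DownloadString(url);
                // Sleeping between requests keeps bandwidth/CPU usage low enough
                // that a shared host is unlikely to flag the crawler as abusive.
                Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }
    }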
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot was possible to arrange in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some will offer an MS SQL Server database you can use alongside the VPS account (on another machine). Installing SQL Server on the VPS itself can be a problem license-wise.
Make sure you check the terms of usage before you open an account, as well as the (virtual) system specs though. Also check if there is some kind of minimum contract period. Sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (Virtual Private Server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a website to act as a crawler, it is not practical, because the web page would have to be accessed for your crawler to work. You will have to read the ToS (Terms of Service) for the host to see what the terms of usage are. Some of the lower-priced hosts will cut your connection, citing "negatively impacting the network", if you try to use too much bandwidth, even though they have given you plenty to use.
VPSes run around $30-80 for a Linux server and $60+ for a Windows server.
Dedicated servers run $100+ for both Linux and Windows.
You don't need any web hosting to run your spider. Just ask for a PC with an internet connection that can act as a dedicated server, configure the database, and run the crawler from there.
I'm architecting a WPF application using the PnP Composite Application Guidance. The application will be run locally, within our intranet.
Modules will be loaded dynamically based on user roles. The modules must therefore be accessible to the application through a network share, thus accessible from the client machines.
What I'd like to do is keep all the module .dlls in a location not accessible to staff, but still be able to provide them to the composite application when demanded and when the current user is authenticated to use that module.
My thought is to load the .dlls by streaming them down from a WCF service, where the WCF service (on the server) can access the .dll repository, but none of the client machines can access it. Authentication would also be handled by the service.
I suspect that I might be overcomplicating things somehow.
Is this something that can be done with a simple filesystem configuration and programmatically passing credentials when accessing the shared folder? If I do this, would access only be granted to the calling application, or would the logged-on user now be able to navigate to the shared folder?
Is this, in any way, a solved problem with MEF or any other project of which you're aware? (I hope this isn't LMGTFY-worthy -- I haven't been able to come up with anything.)
At Argonne National Laboratory we keep all sharable DLLs and other objects (.INI files, PowerBuilder PBD libraries, application software, etc.) on a simple, internally public file server, and objects are downloaded over the network on an as-needed basis as defined by each client/server application. Thus we minimize the maintenance of middleware (Oracle Client, PowerBuilder, Java, Microsoft, ODBC, etc.) to a single file server location, with basically no software installed on the end user PC. Typically we physically download less than a few KB of Registry Keys to the individual end user PC; this includes the full Oracle Client, which if installed on the PC alone would take up 650+ MB of disk space and several thousand Registry Keys, and would be costly to maintain across the enterprise. Instead, our Oracle Client footprint on the PC is about 17 KB.
The only "software" on the client side is a set of Registry Keys containing variables pointing to server locations (e.g. ORACLE_HOME: \\<server name>\ORACLE\v10\Ora10g).
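For example, on the client such a pointer key is read once at start-up and everything else is resolved relative to the server path; the key and value names below are illustrative, not the actual ones used:

    using System;
    using Microsoft.Win32;

    class ServerPathLookup
    {
        static void Main()
        {
            // Only this tiny pointer lives on the PC; the real bits stay on the file server.
            string oracleHome = (string)Registry.GetValue(
                @"HKEY_LOCAL_MACHINE\SOFTWARE\OurLab\Paths", "ORACLE_HOME",
                @"\\fileserver\ORACLE\v10\Ora10g");

            Console.WriteLine("Loading Oracle client bits from: " + oracleHome);
        }
    }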
This has been a very cost-effective solution that we have been using for 10+ years, making all middleware and application software upgrades totally transparent to more than 2000 users Lab-wide. Over the years we have done thousands of object upgrades on the central file server without ever having to install a single upgrade on the end user desktop. Although this has some risks ("thou shalt not copy DLLs over the network", etc.) and is a heavily customized solution, it has worked flawlessly for us throughout, for a large number of applications and middleware.
This is a surprisingly simple solution given today's advanced technology, but it has been totally efficient and cost-effective for us. Several vendors (Citrix and others) have looked at our solution somewhat perplexed, but every vendor of deployment techniques who has seen our deployment has come to the same conclusion, basically: "you do not need us".
When loading modules you need to keep in mind that:
Once loaded, an assembly can't be unloaded (unless you unload the entire application domain) - so if users can log in and out using the same instance, you may have a problem.
"the load context" matters (see http://blogs.msdn.com/suzcook/archive/2003/05/29/57143.aspx) - this may cause problems if you have dependencies between modules or dependencies on assemblies that are not in the "load context"
If the restricted access to the DLLs is due to a licensing issue, maybe you need to refine the licensing mechanism somehow (not tying it to access to the actual code, but to some other checks)?