My company is working with a third-party multi-storefront (portal) product called ZNode. We have a couple of portals/stores set up, and our web team has configured each portal to use different headers/footers/navigation. The stores have different URLs in our dev environment (store1.com/shoppingcart.aspx vs. store2.com/shoppingcart.aspx), present different views of the same pages, and follow the navigation set up by the buttons within each store.
The problem is that I need to troubleshoot issues on store2.com/shoppingcart.aspx, but when I run the project locally through Visual Studio I bounce between store1 (the main store) and store2 (the one I need to troubleshoot). Because the local URL is, of course, a localhost link with a port number, I can't stay on the second portal as I need to: I move between portals whenever I reach a page that belongs to one and not the other, and when I reach a shared page (like shoppingcart.aspx) I land back on the main store/portal.
Does anyone have insight into how I could stay on one portal instead of moving between them? If so, I'd be very interested to try any ideas. I have asked the third-party vendor's tech support but have not heard back yet, and I'd like to get a jump on trying things while I wait for a response that could potentially take a few days.
Currently, the localhost dev site is set up through IIS to run on a certain port, authenticating with our Active Directory login. I don't know whether that helps or not.
Sounds like you need to create multiple bindings to the same site in IIS on your dev machine, e.g. store1.com and store2.com. Then map both to your local box in the hosts file (system32\drivers\etc\hosts):
127.0.0.1 store1.com
127.0.0.1 store2.com
This way your local IIS will answer requests for both domains from the same site.
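If you'd rather script the bindings than click through IIS Manager, something along these lines should work (the site name "ZNodeDev" here is just a placeholder for whatever your dev site is called):

%windir%\system32\inetsrv\appcmd set site /site.name:"ZNodeDev" /+bindings.[protocol='http',bindingInformation='*:80:store1.com']
%windir%\system32\inetsrv\appcmd set site /site.name:"ZNodeDev" /+bindings.[protocol='http',bindingInformation='*:80:store2.com']

Then browsing to store2.com locally keeps you on the second portal, because the host header, not a localhost port, is what selects the store.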
When I use ManagementObjectSearcher("SELECT * FROM Win32_PhysicalMedia") in a web page, it returns the server's information, but I need information about the computer of whoever is using the website. I built a website that includes the Win32_PhysicalMedia query; however, it returns only the server computer's information. Is it possible to get a visitor's personal computer information from a web site?
Thanks.
Well, of course any code running on the web server can get information about the web server that is running that code.
However, the client side?
That is a browser, and what happens if the user is running an iPad or an Android phone? Such information about the client side is not possible to get.
Why?
Well, when you come to my web site to view a cute cat picture? I can't have code run on YOUR computer that mucks around, looks for a document called "my passwords", or looks for files about banking. Or how about I mess around and steal all your family pictures?
If you could do that, the internet would be the world's worst system and would represent no security at all.
So, what information about the client-side computer can you get?
Well, you can get information about what kind of browser they are using, and some JavaScript on that web page can get, say, the current screen resolution (or, better said, the size of the browser window).
But things like the OS, the hardware, and all your banking information and files on that computer? Nope, that is 100% hands off. Browsers are VERY much secured and are what we call sandboxed VERY tight. For example, while you can drop, say, a file-upload control into a web page?
That control will let the USER click the button and choose a file? You can't EVEN in JavaScript (browser-side, client-side script) SET or CHOOSE or PICK the file name! In other words, even that browser control is secured by NOT allowing you, the developer, to pick a local file. Again, this is for reasons of security, since, as noted, while you look at my web site's cute cat pictures, you can't have browser code running around and grabbing files from YOUR computer while you are looking at MY WEB site!
So browser code can't even select a file to upload. The user MUST select the file, and when you upload that file, ONLY the file name is passed along with the file data (not even the full path to the file on YOUR computer is sent to the web server). Again, this is all locked down for reasons of security.
If you need information about the client-side computer, then you have to provide the user with a program to download. They would run that desktop program; it would do your kind of hardware query, gather the information, and THEN send the data up to your web site (you would create a web method, a so-called web API, for that desktop program to call and send the data to). Of course, you would need a program written for Android phones, Apple phones, Windows desktops, Apple desktops, and even more choices.
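To sketch the idea (and only as a sketch): a small Windows desktop program could run the very same WMI query from the question on the CLIENT machine and post the results to your site. The endpoint URL below is hypothetical, and you'd need a reference to System.Management:

using System;
using System.Management;   // requires a project reference to System.Management
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HardwareReporter
{
    static async Task Main()
    {
        var report = new StringBuilder();

        // The same WMI query from the question, but now it runs on the
        // user's machine, so it returns THEIR hardware, not the server's.
        var searcher = new ManagementObjectSearcher("SELECT * FROM Win32_PhysicalMedia");
        foreach (ManagementObject media in searcher.Get())
        {
            report.AppendLine(Convert.ToString(media["SerialNumber"]).Trim());
        }

        // Hypothetical web API on your site that receives the data.
        using (var client = new HttpClient())
        {
            await client.PostAsync("https://yoursite.example/api/hardware",
                new StringContent(report.ToString(), Encoding.UTF8, "text/plain"));
        }
    }
}

And as noted above, you would need a separate build of something like this for every platform you care about.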
So, web-based software is VERY much locked down and has ZERO ability to get information about the client-side computer. As noted, you can in JavaScript get information about what browser the user is using (but then again, say, Chrome runs on Android and on the desktop). And you can, as noted, get the current window size, which in most cases tells you the screen resolution. A good many browsers will also let you get the user's current zoom level (I have been wanting to track this information for a very long time, since beyond 125% zoom some of my web pages look like garbage).
So you have to grasp how the web, this supposed fad called the internet, works. The client-side desktop computers (or phones) that interact with your web site are using a browser, and as noted, those browsers are VERY much locked down, of course for good reason: security. Thus I can sleep well at night knowing that when I go to visit YOUR web site with a cute cat picture, you're not running code on my computer that is poking and peeking around on what is MY computer, not YOUR computer.
I need to find a way to block user access to my database that will be installed on the user's PC.
So, here at the company we have a problem. We need to block user access to our database that will be installed on their PC. What I mean by this is...
We have two pieces of software: a web-app ERP and an installable finances app.
We reached the conclusion that it was unnecessary to have two standalone apps, and that we should put the finances app inside our ERP.
But this comes with a problem: a big part of our user base doesn't trust the web or web apps. They think that what is on their PC is what is safe, and that that is where it should be.
We don't want to needlessly maintain two standalone applications.
We asked our users if they'd be happy with a progressive web app; their answer was the same.
Then we tried to find a way to run our ERP on their PCs while offline, as an executable, but that brings a lot of trouble: we need to install IIS, PostgreSQL, the .NET Framework, pgAdmin, our metadata database (which should not be accessible to the user in any way, shape, or form!), etc., just to let our app run on the user's PC.
Of course we don't want to do that, but we have no choice left. We need to at least block our metadata database from being accessed, since the whole structure of the web app is in there and we don't want to share it with the competition.
Our solution was to install everything needed inside a virtual drive and run the app from there, but all the files and databases are still available for the user to mess with.
How can we best restrict access to that virtual drive and protect our intellectual property? Is it even feasible? I've run out of ideas and don't know what else to do, so any help is welcome.
Should I take another route or is it a lost cause?
Whoever has control of the database machine has control of the database. So if the database is running on the client's machine, there is no way to keep an administrative user out of the database.
So if the users don't trust a web application, they will have to trust their system administrators (or themselves, if they have administrator rights to their machines).
We have a scenario where we use multiple web apps in Azure. When scaling out, I understand Azure simply starts more web processes, thereby allowing connections to multiple servers, with a broadcast system in place for synchronization. The issue is: what happens to an open socket when we manually or automatically scale down? Say we have 5 servers, each holding an open web socket, and we scale down to 1. What happens to the 4 sockets that were connected to the servers being removed?
As a side note, if they stick around until the client disconnects the socket, will Azure bill me for that time?
If they don't stick, it's only a matter of making sure the client reconnects properly.
From what I've seen so far, they seem to stick, but that might just be a grace period while it's scaling down, so I'd rather be sure here with an answer from someone who actually knows.
From another thread a few years ago, it appears the newest instance is the one removed (most of the time), but I cannot find anything about Azure waiting for connections to drop:
Which instances are stopped when I scale my Azure role down?
There is, however, a management API that you can use to scale down (delete) specific cloud service role instances.
The Delete Role Instances operation deletes multiple role instances from a deployment in a cloud service.
POST Request
https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roleinstances/
Using this, you can monitor which instances you want to remove and send the delete command programmatically. That way you can wait for the users to cleanly disconnect from an instance before deleting it.
Reference to the Microsoft API doc for this:
https://msdn.microsoft.com/library/azure/dn469418.aspx
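For what it's worth, a rough C# sketch of calling that operation (the request shape here is from memory of the Service Management API, so double-check it against the doc above; the certificate file and the instance name WebRole1_IN_4 are placeholders):

using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

class RoleInstanceDeleter
{
    static async Task Main()
    {
        // Auth is via a management certificate uploaded to the subscription.
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(new X509Certificate2("management.pfx", "pfx-password"));

        using (var client = new HttpClient(handler))
        {
            // The Service Management API requires an x-ms-version header.
            client.DefaultRequestHeaders.Add("x-ms-version", "2013-08-01");

            var url = "https://management.core.windows.net/<subscription-id>" +
                      "/services/hostedservices/<cloudservice-name>" +
                      "/deployments/<deployment-name>/roleinstances/?comp=delete";

            // List the instances whose sockets have drained and can go away.
            var body = "<RoleInstances xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                       "<Name>WebRole1_IN_4</Name></RoleInstances>";

            var response = await client.PostAsync(url,
                new StringContent(body, Encoding.UTF8, "application/xml"));
            Console.WriteLine(response.StatusCode);
        }
    }
}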
So, after quite a bit of development and testing, there's an answer. We deploy using Kudu, so Azure builds and publishes the web app. The IIS instances that have open web sockets run their Application_End cycle and shut down the TCP connections.
As far as I can tell so far, this happens before the new site is spun up and accepts connections, so there is no fear of being billed for additional hours. This also seems to occur for all web apps (sites) within the plan when scaling the web app plan (server farm), no matter whether up or down.
This might be an inconvenience for our users, but with a proper shutdown on the server side and reconnection from the client side it should work out just fine.
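For illustration, the server-side half of that might look something like this in Global.asax.cs; the static socket registry is hypothetical and stands in for however you track accepted sockets:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

public class Global : System.Web.HttpApplication
{
    // Hypothetical registry: each accepted WebSocket adds itself here.
    public static readonly ConcurrentBag<WebSocket> OpenSockets = new ConcurrentBag<WebSocket>();

    protected void Application_End()
    {
        // Close every tracked socket cleanly so clients see a normal
        // close frame and know to reconnect (to a surviving instance).
        var closes = new List<Task>();
        foreach (var socket in OpenSockets)
        {
            if (socket.State == WebSocketState.Open)
            {
                closes.Add(socket.CloseAsync(
                    WebSocketCloseStatus.EndpointUnavailable,
                    "Instance shutting down, please reconnect",
                    CancellationToken.None));
            }
        }
        Task.WaitAll(closes.ToArray(), TimeSpan.FromSeconds(5));
    }
}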
There's only one way to find out: test it. (Or ask an Azure engineer, but that might take ages.)
I would assume that it would not scale down a machine while someone is connected. Imagine watching a stream and having it randomly stop to connect to another server. I wouldn't think Microsoft would design it to drop connections.
The first paragraph mentions web roles:
On the Scale page of the Azure Management Portal, you can manually scale your application or you can set parameters to automatically scale it. You can scale applications that are running Web Roles, Worker Roles, or Virtual Machines. To scale an application that is running instances of Web Roles or Worker Roles, you add or remove role instances to accommodate the work load.
Web apps or web roles require the use of a VM. This is detailed in the first bullet point listed:
You should consider the following information before you configure scaling for your application:
• You must add Virtual Machines that you create to an availability set to scale an application that uses them. The Virtual Machines that you add can be initially turned on or turned off, but they will be turned on in a scale-up action and turned off in a scale-down action. For more information about Virtual Machines and availability sets, see Manage the Availability of Virtual Machines.
The information that follows the bullet points details the scaling process.
For additional information, this link also mentions the use of VMs for web apps. The text below can be found in the section titled Web App Concepts:
Auto Scaling - Web Apps enables you to quickly scale-up or out to handle any incoming customer load. Manually select the number and size of VMs or set up auto-scaling to scale your servers based on load or schedule.
https://azure.microsoft.com/en-us/documentation/articles/app-service-web-overview/
I am still experimenting with Azure multi-tenant development. I now have my first trial thingy, but in order to use subdomain names (customer.site.com) I need to switch my Azure website to shared/reserved mode. Since I am still experimenting, I'd rather not start paying for Azure. Is there a way around this? Or is it possible to test the multi-tenant part with my local Visual Studio web server?
No, you can't have custom domain names with FREE websites.
But what you could do is switch the tenant recognition from a sub-domain to a path. So instead of tenant10.site.com/ you would have mysites.azurewebsites.net/tenant10/. That would basically be just a change in URL Rewrite rules (see the sketch below), which I think is the right way to handle multi-tenancy recognition at the URL level anyway. And URL Rewrite is supported in Azure Web Sites as well as in Azure Cloud Services.
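As a sketch, assuming the IIS URL Rewrite module is available, a web.config rule along these lines would map the first path segment to a tenant query parameter (the rule name and pattern are just placeholders):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="TenantFromPath" stopProcessing="true">
        <!-- tenant10/some/page -> some/page?tenant=tenant10 -->
        <match url="^([a-zA-Z0-9-]+)/(.*)$" />
        <action type="Rewrite" url="{R:2}?tenant={R:1}" appendQueryString="true" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>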
Testing the multi-tenancy locally is even easier. You just open your hosts file (in a typical Windows installation it is located at c:\windows\system32\drivers\etc\hosts) and add entries for all the (sub)domains you want to test, mapping them to 127.0.0.1. Something like:
127.0.0.1 tenant1.mydomain.com
127.0.0.1 tenant2.mydomain.com
127.0.0.1 tenant15.mydomain.com
...
Then run your project with F5 as you normally would and manually type the new address into the browser's address bar: tenant1.mydomain.com.
First, however, launch the project with F5 once to check the real IP address of the local development fabric, because sometimes it is not 127.0.0.1 but 127.0.0.8 or something else. The IP address used in your browser's initial launch is the IP address you have to put in your hosts file.
Also, if you put real (sub)domain names in the hosts file, never forget to remove the entries afterwards, or you will never reach the real Internet sites.
I'm trying to ensure that the server I'm currently running a piece of code on is a web front end server. I thought it might be as simple as this:
SPServer.Local.Role.ToString().Equals("WebFrontEnd")
However, if you are running your WFE in addition to app servers, etc., on the same box, this returns "Application" and fails to correctly identify the server as a web front end.
My idea is to determine whether the Microsoft SharePoint Foundation Web Application service is started and running on the server; this can be checked manually by going to Central Admin > System Settings > Manage Services on Server.
I need to do this programmatically in C#. I'm fairly sure these services and their statuses can be obtained via PowerShell, which would be a viable solution, but I'm not sure how to do it either way.
EDIT -- I'm aware of a way to loop through "services" using the following code:
SPServiceCollection services = SPFarm.Local.Services;
foreach (SPService service in services) {
    // Inspect service.TypeName and service.Status here.
}
However, this collection includes some items that look suspiciously similar to the list under "Services on Server", but they are all listed with a status of "Online" and don't seem to include the service I'm after.
I'm not on a machine to check, but I have a feeling you'll have more luck with SPServer.Local.ServiceInstances; that sounds like it should give the services on the particular server rather than on the farm in general.
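Untested (as noted, I'm not at a machine), but a sketch of that check would be something like:

using System.Linq;
using Microsoft.SharePoint.Administration;

public static class WfeCheck
{
    // True if the "Microsoft SharePoint Foundation Web Application" service
    // (SPWebService) is online on THIS server, i.e. it serves web content.
    public static bool IsWebFrontEnd()
    {
        return SPServer.Local.ServiceInstances
            .Any(instance => instance.Service is SPWebService
                          && instance.Status == SPObjectStatus.Online);
    }
}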
Unfortunately, not even that is reliable, as the Foundation Web Application service can be running on servers that are not actually front-end servers (i.e. the load balancer never directs traffic to them). Ultimately it is the list of servers known to the load-balancing mechanism that determines which servers are true front ends, and because there are various types of load balancers and no single interface to all of them, I don't believe there is any one guaranteed method of determining the number of true web front ends in a farm.