VS 2008 web services generating wrong URL - C#

I haven't done much web service development, so this is probably an easy fix. I created both a mobile device project and a web service project. When I add the service to my mobile project, the SOAP document method attribute is wrong; all the other info is correct. It generates service/method when it should actually be service?op=method, and it will not work unless I manually change this URL.

You can try changing the URL you are using from http://whatevertoyourwebservice.asmx to http://localhost/whateverwebservice.asmx. If you are able to use localhost, this works without having to change the URL in the app.config file. The other way is to use JavaScript to call your web service.
Here is an example I use for a UTC time web service:
var portalUrl = window.location.href.substring(0, window.location.href.indexOf('/', 8));
var serviceUrl = portalUrl + "/your_webservice_location/";
var utcTimeOffsets = [];

function GetUtcOffsets(timezones, func) {
    var proxy = new ServiceProxy(serviceUrl);
    proxy.isWcf = false;
    proxy.invoke("GetUTCOffsets",
        { tzName: timezones },
        function (result) {
            utcTimeOffsets = result;
            if (func) func();
        },
        function (error, i, request) {
            alert(error);
            //setTimeout(function() { GetUTC(location) }, 1000);
        },
        false);
}
Other than that, there is no way that I have found to make the C# web service call dynamic. If you find a way, please let me know. I know that these two options will work.
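For what it's worth, the proxy class that "Add Web Reference" generates derives from System.Web.Services.Protocols.SoapHttpClientProtocol and exposes a public Url property, so the endpoint can at least be overridden from C# at runtime. A minimal sketch (the proxy and method names below are placeholders, not from the question):

// Hypothetical generated proxy from "Add Web Reference"; because it derives from
// SoapHttpClientProtocol, the Url can be changed before the call is made.
var service = new MyWebServiceReference.Service();
service.Url = "http://myserver/whateverwebservice.asmx"; // whatever host the device should talk to
var result = service.SomeMethod();                        // hypothetical web method

This does not change the SOAP action format itself, but it avoids hard-coding the host in the generated code or app.config.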

Related

https scheme issue with Swashbuckle using Proxied OWIN self-hosted Web API

I have been working on a Web API in C# using the self-hosted OWIN Web API, which is proxied.
I have set up the Swashbuckle.Core package and it all works fine. I am just having one slight issue with the base URL.
When I go to my Swagger UI docs page, it loads fine, but it tries to get https://api.domain.com:80/ instead of https://api.domain.com/, and the site says "Can't read from server. It may not have the appropriate access-control-origin settings.".
Here is my code that enables SwaggerUI:
configSwag.EnableSwagger("docs/{apiVersion}/swagger", c =>
{
    var baseDirectory = AppDomain.CurrentDomain.BaseDirectory;
    var commentsFileName = Assembly.GetExecutingAssembly().GetName().Name + ".XML";
    var commentsFile = Path.Combine(baseDirectory, commentsFileName);

    c.Schemes(new string[] { "https" });
    c.SingleApiVersion("v1", "MechaChat v1 Docs");
    c.IncludeXmlComments(commentsFile);
    c.PrettyPrint();
}).EnableSwaggerUi("v1/docs/{*assetPath}", c =>
{
    c.DocExpansion(DocExpansion.List);
    c.SupportedSubmitMethods("GET", "POST");
});
What would I need to do for the docs to get https://api.domain.com/ instead of https://api.domain.com:80/?
I ended up using NSwag; it seems to be working perfectly fine now!
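For anyone who wants to stay on Swashbuckle.Core, the base URL it writes into the document can usually be overridden with the RootUrl resolver on the EnableSwagger configuration, which helps when the self-host sits behind a proxy. A hedged sketch (hard-coding the https scheme and dropping the port are assumptions for this particular setup):

configSwag.EnableSwagger("docs/{apiVersion}/swagger", c =>
{
    // Build the root URL from the incoming request's host only, so the proxied port (:80) never leaks in.
    c.RootUrl(req => "https://" + req.RequestUri.Host);
    c.Schemes(new[] { "https" });
    // ...rest of the configuration as above...
});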

How to make a complex file download functionality with Blazor server?

I have a Blazor Server application. I want to allow the user to download files, but the content of the files needs to be built dynamically.
Basically, the application shows reports to the user based on filters and so on, and I want the user to have the option to download whatever he is currently seeing.
I know I can make a "link button" that points to a Razor page that returns some sort of FileContentResult in its OnGet method, but I have no idea how to pass any data to it so that the correct report file can be built.
I know there is an alternative that uses JavaScript but, as far as I know, it's more cumbersome, and I'm not sure if it is any better.
I thought about making a request to some sort of REST/Web API (which would allow me to pass arguments), but I cannot seem to get a Web API project and a Blazor Server project to run at the same time. The only partial success I've had is adding a Web API project to my Blazor Server solution and starting both, but then, while debugging, for some reason both processes stop when I download the file.
Also, the application must be hosted on an Azure Web App, and I am not sure how feasible it would be to run both projects at the same time.
So, how can I make my Blazor Server app allow the user to download a file but generate the file dynamically based on what the user is seeing in his browser?
The JavaScript alternative is very straightforward.
export function saveAsFile(filename, bytesBase64) {
    if (navigator.msSaveBlob) {
        // Download document in Edge browser
        var data = window.atob(bytesBase64);
        var bytes = new Uint8Array(data.length);
        for (var i = 0; i < data.length; i++) {
            bytes[i] = data.charCodeAt(i);
        }
        var blob = new Blob([bytes.buffer], { type: "application/octet-stream" });
        navigator.msSaveBlob(blob, filename);
    }
    else {
        var link = document.createElement('a');
        link.download = filename;
        link.href = "data:application/octet-stream;base64," + bytesBase64;
        document.body.appendChild(link); // Needed for Firefox
        link.click();
        document.body.removeChild(link);
    }
}
Create a byte stream of whatever content you want and call this function through IJSInterop. I have used it in Blazor Server and it works well.
Please note: I found this piece of code online but I don't remember where. I will be happy to give credit to the original author if someone knows the source.
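On the C# side, the interop call might look roughly like this (a minimal sketch, assuming .NET 5+ JS isolation; the module path, the BuildReportBytes helper and the file name are placeholders for whatever builds the report the user is currently viewing):

// Inside a Blazor Server component; requires Microsoft.JSInterop and an injected
// IJSRuntime (e.g. @inject IJSRuntime JS in the .razor file).
private async Task DownloadReportAsync()
{
    byte[] reportBytes = BuildReportBytes(); // hypothetical helper that renders the currently filtered report
    var module = await JS.InvokeAsync<IJSObjectReference>("import", "./js/saveAsFile.js"); // assumed module path
    await module.InvokeVoidAsync("saveAsFile", "report.csv", Convert.ToBase64String(reportBytes));
}

On older Blazor versions without JS isolation, attaching saveAsFile to window and calling JS.InvokeVoidAsync("saveAsFile", ...) directly works the same way.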

PayPalAPIInterfaceServiceService and SetExpressCheckout config / credential information

I created an app in my business account on this page:
https://developer.paypal.com/developer/applications/
When I click on the app I created I see the following:
Sandbox account, Client ID, and Secret.
I am trying to call SetExpressCheckout… but the documentation is unclear and examples are all over the map.
Basically I’m seeing things like:
var request = new SetExpressCheckoutReq() { … };
var config = new Dictionary<string, string>()
{
    { "mode", "sandbox" }, // some variations of these values
    { "clientId", "fromAbovePage" },
    { "clientSecret", "fromAbovePage" },
    { "sandboxAccount", "fromAbovePage" },
    { "apiUsername", "IDontKnow" },
    { "apiPassword", "IDontKnow" },
    { "apiSignature", "IDontKnow" }
};
var service = new PayPalAPIInterfaceServiceService(config);
var response = service.SetExpressCheckout(request,
    new SignatureCredential(config["apiUsername"], config["apiPassword"], config["apiSignature"]));
Also, kind of weird that credentials go into both the PayPalAPIInterfaceServiceService and the actual SetExpressCheckout call.
What are (and where do I get) the correct values for the above config? (the request itself I have pretty much figured out)
Note: PayPal support told me that I need to use Reference Transactions in order to charge varying amounts over potentially varying times without subsequent user interaction, if that is relevant.
I would love to see examples of this with the most recent APIs if anyone has that information as well.
Thank you.
SetExpressCheckout is a legacy NVP API
For sandbox, it uses credentials from the "Profile" of a sandbox account in https://www.paypal.com/signin?intent=developer&returnUri=https%3A%2F%2Fdeveloper.paypal.com%2Fdeveloper%2Faccounts%2F
For live, https://www.paypal.com/api
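Assuming the classic merchant SDK (PayPalAPIInterfaceServiceService), those NVP credentials typically go into the config dictionary under the SDK's account1.* key names rather than as clientId/clientSecret. A hedged sketch with placeholder values:

var config = new Dictionary<string, string>()
{
    { "mode", "sandbox" },                                           // "live" for production
    { "account1.apiUsername", "api-username-from-sandbox-profile" },
    { "account1.apiPassword", "api-password-from-sandbox-profile" },
    { "account1.apiSignature", "api-signature-from-sandbox-profile" }
};
var service = new PayPalAPIInterfaceServiceService(config);
var response = service.SetExpressCheckout(request);                 // request built as in the question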
ClientID/Secret credentials are used for the current v2/checkout/orders REST API, which at the moment does not have any public documentation for vaulting or reference transactions; it is for one-time payments. You can find information on a server-side integration at https://developer.paypal.com/docs/checkout/reference/server-integration/
If you are using this REST API integration, create two routes: one for 'Set Up Transaction' and one for 'Create Transaction'. Then pair them with this approval flow: https://developer.paypal.com/demo/checkout/#/pattern/server
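If you go the v2/checkout/orders route, the two server-side routes can be thin wrappers around PayPal's documented REST endpoints. A rough, hedged sketch using HttpClient and Newtonsoft.Json (the controller shape, route names, CLIENT_ID/CLIENT_SECRET placeholders and the fixed amount are all assumptions; attribute routing must be enabled):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;
using Newtonsoft.Json.Linq;

public class PayPalOrdersController : ApiController
{
    // Sandbox REST host; use https://api-m.paypal.com/ for live.
    private static readonly HttpClient Http = new HttpClient { BaseAddress = new Uri("https://api-m.sandbox.paypal.com/") };

    private static async Task<string> GetAccessTokenAsync()
    {
        var req = new HttpRequestMessage(HttpMethod.Post, "v1/oauth2/token");
        req.Headers.Authorization = new AuthenticationHeaderValue("Basic",
            Convert.ToBase64String(Encoding.ASCII.GetBytes("CLIENT_ID:CLIENT_SECRET"))); // placeholders
        req.Content = new StringContent("grant_type=client_credentials", Encoding.ASCII, "application/x-www-form-urlencoded");
        var body = await (await Http.SendAsync(req)).Content.ReadAsStringAsync();
        return (string)JObject.Parse(body)["access_token"];
    }

    // "Set Up Transaction": create the order and hand its id to the client-side approval flow.
    [HttpPost, Route("api/paypal/orders")]
    public async Task<IHttpActionResult> CreateOrder()
    {
        var req = new HttpRequestMessage(HttpMethod.Post, "v2/checkout/orders");
        req.Headers.Authorization = new AuthenticationHeaderValue("Bearer", await GetAccessTokenAsync());
        req.Content = new StringContent(
            "{\"intent\":\"CAPTURE\",\"purchase_units\":[{\"amount\":{\"currency_code\":\"USD\",\"value\":\"10.00\"}}]}",
            Encoding.UTF8, "application/json");
        var body = await (await Http.SendAsync(req)).Content.ReadAsStringAsync();
        return Json(JObject.Parse(body));
    }

    // Capture the order after the buyer approves it.
    [HttpPost, Route("api/paypal/orders/{orderId}/capture")]
    public async Task<IHttpActionResult> CaptureOrder(string orderId)
    {
        var req = new HttpRequestMessage(HttpMethod.Post, "v2/checkout/orders/" + orderId + "/capture");
        req.Headers.Authorization = new AuthenticationHeaderValue("Bearer", await GetAccessTokenAsync());
        req.Content = new StringContent("{}", Encoding.UTF8, "application/json");
        var body = await (await Http.SendAsync(req)).Content.ReadAsStringAsync();
        return Json(JObject.Parse(body));
    }
}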

Acumatica Web Services API Login

I am attempting to perform some basic integration using Acumatica's web services. Unfortunately, I'm having problems logging in. According to their documentation, this process should look something like:
apitest.Screen context = new apitest.Screen();
context.CookieContainer = new System.Net.CookieContainer();
context.AllowAutoRedirect = true;
context.EnableDecompression = true;
context.Timeout = 1000000;
context.Url = "http://localhost/WebAPIVirtual/Soap/APITEST.asmx";
LoginResult result = context.Login("admin", "E618");
Simple enough. However, after creating and importing a WSDL file from Acumatica into Visual Studio, I found I don't have a Screen object. I do, however, have a ScreenSoapClient object, which has a similar Login() method.
ScreenSoapClient context = new Acumatica.ScreenSoapClient("ScreenSoap");
LoginResult result = context.Login("username", "password");
That part works. In fact, the LoginResult gives me a session ID. However, if I try to make any calls to the service, such as:
CR401000Content cr401000 = context.CR401000GetSchema();
I get an error: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> PX.Data.PXNotLoggedInException: Error #185: You are not currently logged in.
While the version of Acumatica we're using does appear to be slightly newer, I'm unsure why the Screen() object isn't available. And if I try a bad username/password, Login() does fail (as it should). From what I can tell, the ScreenSoapClient class is using service model details from web.config, so it's getting the endpoint address and other details there.
Is there something I'm missing or doing wrong?
As I see it, you used WCF to create your service reference.
So you should enable cookies in the service binding:
var binding = new BasicHttpBinding()
{
    AllowCookies = true
};
var address = new EndpointAddress("http://localhost/WebAPIVirtual/Soap/APITEST.asmx");
var c = new ServiceReference1.ScreenSoapClient(binding, address);
Or, you can use the old asmx web service reference (http://msdn.microsoft.com/en-us/library/bb628649.aspx).
Then everything will be the same as in Acumatica's documentation.
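With cookies enabled, the calls from the question should then stay inside one logged-in session. A short sketch reusing the client built above (credentials as in the documentation example):

var context = new ServiceReference1.ScreenSoapClient(binding, address);
LoginResult result = context.Login("admin", "E618");
CR401000Content cr401000 = context.CR401000GetSchema(); // now runs inside the authenticated session
// ... work with the schema here ...
context.Logout();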
As noted in a comment above, I was able to make contact with a representative from Acumatica. He had me remove then recreate the service references in our project and try again. That apparently did the trick and the "Error #185: You are not currently logged in" error went away.

JavaScript in C# ASP MVC issue

We have a web project that takes data from an MS SQL database and uses the Google Visualisation API to display charts on the web view.
Recently we added Castle Windsor so we can configure the application for different users with an XML file. Before we added this, the view worked fine, using the baked-in parameters that were needed for this query. For some reason, when we send in the parameters from the XML files (running with breakpoints shows that the parameters are being passed to the main controller action for the page), the data isn't being returned. Here is some of the code.
JavaScript
<script type="text/javascript">
    var csvDataUrl = '@Url.Action("TradeValuesDataCsv", "Dashboard")';
    var jsonDataUrl = '@Url.Action("TradeValuesDataJson", "Dashboard")';

    google.load("visualization", "1", { packages: ['table', 'corechart', 'gauge'] });
    google.setOnLoadCallback(drawCharts);
    drawCharts();

    $("body").on({
        ajaxStart: function () {
            $(this).addClass("loading");
        },
        ajaxStop: function () {
            $(this).removeClass("loading");
        }
    });

    function drawCharts() {
        var queryString = 'platform=' + $('#PlatformDropDownList').val();
        queryString += '&startDate=' + $('#startDatePicker').val();
        queryString += '&endDate=' + $('#endDatePicker').val();
        queryString += '&model=' + $('#ModelDropDownList').val();
        queryString += '&eventType=' + '@Model.EventType';
        queryString += '&parameterName=' + '@Model.ParameterName';
        $.ajax({
            type: "POST",
            url: jsonDataUrl,
            data: queryString,
            statusCode: {
                200: function (r) {
                    drawToolbar(queryString);
                    drawTable(r);
                    drawChart(r);
                },
                400: function (r) {
                },
                500: function (r) {
                }
            }
        });
    }
Main controller method for this page:
public ActionResult ActionResultName(EventTypeParameterNameEditModel model)
{
    var viewModel = new EventTypeParameterNameViewModel(_queryMenuSpecific);
    viewModel.EventType = model.EventType;
    viewModel.ParameterName = model.ParameterName;
    PopulateFilters(viewModel);
    return this.View(viewModel);
}
Controller method that retrieves the JSON data:
public ActionResult ActionResultNameJson(EventTypeParameterNameEditModel filters)
{
    List<CustomDataType> results = this.GetTradeValues(filters);
    return this.Json(results, JsonRequestBehavior.AllowGet);
}
EDIT: I have managed to find a solution, even if it is a rather messy one. I have some filters built into the page that allow the user to filter by device and by OS, and these were being populated on page load with 'undefined'. I didn't spot this the first time round with NHProf running, but this wasn't happening when the page loaded before we configured the input to come from XML. I will add this as an answer, accept it, and close the question. Thanks, everyone, for your attempts to help. I'm starting to really like this community; it's the perfect place to find help as a graduate developer.
Yep. I'm not a Razor syntax expert, but I think these property references are probably your problem. I suspect Razor is going to tend to avoid asserting itself inside strings being used in statements with properties in JS contexts. Or you could try implementing them as getter functions, which would probably work. Otherwise, an @ and a . in a string could easily lead to confusing mix-ups with email addresses when it's not an obvious method call:
queryString += '&eventType=' + '@Model.EventType';
queryString += '&parameterName=' + '@Model.ParameterName';
As a general rule in any server-to-client scenario, my advice is to confine JavaScript coming directly from the back end to JSON objects only. That way you have more granular control over what's going on on both sides of the HTTP request wall, and your client-side devs don't have to figure out where stuff is getting built if there's a short-term need to modify it quickly. In general, don't build behavioral code with other code if you can avoid it.
I couldn't convince my .net MVC boss at first but he slowly came around to the idea on his own months later.
We also store a URL base path, along with some other context-shifting params, in a standard JSON object that loads on every page so the JS devs can use these things in linked JS files rather than having to work with JS in the HTML (I don't recall why, but document.location wasn't always going to work).
Lastly, try to keep the JS out of the HTML. Link it. It seems like a pain from a procedural POV but trust me. It makes life much easier when you are juggling 3 major concerns as one ball each rather than all in the same jumbled HTML/template mess.
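As an illustration of the JSON-objects-only idea above, one hedged sketch (the controller and route names are hypothetical) is a small action that serializes nothing but data for the linked JS files to consume; the same anonymous object could just as easily be rendered once per page from the layout:

// Hypothetical MVC controller exposing per-page context as plain data (no behavior).
public class PageContextController : Controller
{
    public ActionResult Current(string eventType, string parameterName)
    {
        var context = new
        {
            baseUrl = Url.Content("~/"),                                  // URL base path for linked JS files
            jsonDataUrl = Url.Action("TradeValuesDataJson", "Dashboard"), // endpoint the charts post to
            eventType = eventType,                                        // context-shifting params
            parameterName = parameterName
        };
        return Json(context, JsonRequestBehavior.AllowGet);
    }
}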
It turned out that the problem was not in my Javascript. I have some filters in there that allow the user to filter the results my model and operating system and date and what not. These were being automatically populated on page load with 'undefined' which is not an option in the database. I added something to catch that in the call to the query and it seemed to solve the problem.
