I'm currently packaging some files and pushing them on to a NuGet feed on one of our servers using the command line tool.
Rather than using the command line tool I've set up a project using Nuget.Core and successfully managed to create a package. I'm now trying to push that package from my machine on to the NuGet feed via NuGet.Core.
Using the command line tool that looks like this (and I got this working too):
nuget.exe push package.nupkg -ApiKey MYAPIKEY -Source http://nugetpackagefeedaddress
What I want to do is replicate the push function using NuGet.Core. The closest I've managed so far is getting two repositories from the PackageRepositoryFactory, one for the local machine path and one for the package feed, then retrieving the package from the local one and trying to add it to the feed like this:
var remoteRepo = PackageRepositoryFactory.Default.CreateRepository("myNugetPackagefeedUrl");
var localRepo = PackageRepositoryFactory.Default.CreateRepository(@"locationOfLocalPackage");
var package = localRepo.FindPackagesById("packageId").First();
remoteRepo.AddPackage(package);
This code results in a NotSupportedException stating 'Specified method is not supported'.
Is it possible to push packages using NuGet.Core, and am I anywhere close to it with the above code?
Note: I'm aware I could wrap the call to nuget.exe and call that from .NET but I'd either want to package and push from NuGet.Core or do both by wrapping the calls to nuget.exe rather than half and half
So it turns out I was looking in the wrong place entirely. The method I wanted was PushPackage on PackageServer.
The code now looks like this:
var localRepo = PackageRepositoryFactory.Default.CreateRepository(@"locationOfLocalPackage");
var package = localRepo.FindPackagesById("packageId").First();
var packageFile = new FileInfo(@"packagePath");
var size = packageFile.Length;
var ps = new PackageServer("http://nugetpackagefeedaddress", "userAgent");
ps.PushPackage("MYAPIKEY", package, size, 1800, false);
I'm not sure what the best value for the userAgent parameter would be when newing up the PackageServer. Similarly, if anyone has advice on what the timeout or disableBuffering parameters should be, let me know (for example, is the timeout in ms or seconds?).
The PushPackage method signature looks like this:
void PackageServer.PushPackage(string apiKey, IPackage package, long packageSize, int timeout, bool disableBuffering)
In addition to rh072005's answer:
The timeout is in milliseconds, so be careful.
The URI is tricky. For a NuGet.Server implementation, the PushPackage URI should be "http://nugetserveraddress", while for IPackageRepository objects the URI becomes "http://nugetserveraddress/nuget".
For large packages you will get a (404) Not Found if the IIS server is not configured to accept large requests.
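Putting rh072005's answer and these notes together, a hedged sketch of the full push might look like the following. The feed URL, API key, and paths are placeholders, and a live feed is needed to actually run it; note the base server address (no trailing /nuget for PushPackage) and the timeout given in milliseconds:

```csharp
using System.IO;
using System.Linq;
using NuGet;

class PushSketch
{
    static void Main()
    {
        // Placeholder path and package id - replace with your own.
        var localRepo = PackageRepositoryFactory.Default.CreateRepository(@"C:\LocalPackages");
        var package = localRepo.FindPackagesById("MyPackage").First();
        var size = new FileInfo(@"C:\LocalPackages\MyPackage.1.0.0.nupkg").Length;

        // Base server address for NuGet.Server - no trailing /nuget here.
        var server = new PackageServer("http://nugetserveraddress", "MyCompany-Deploy/1.0");

        // The timeout is in milliseconds: 60 * 1000 = one minute.
        server.PushPackage("MYAPIKEY", package, size, 60 * 1000, disableBuffering: false);
    }
}
```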
I am trying to add update checking to my app following this https://learn.microsoft.com/en-us/windows/uwp/packaging/self-install-package-updates#mandatory-package-updates.
However, I found that when I call GetAppAndOptionalStorePackageUpdatesAsync() on my developer machine, it always returns an item. In the Microsoft documentation sample, the returned list is empty when there is no update, which is not consistent with what I see. So I tried to compare the version numbers, but the numbers seem incorrect (see the code below).
The version number returned is always the same as my manifest setting instead of the version from the Store. (If I lower the manifest version number, the returned version number changes with it, so it's not the number from the Store.)
Even if I run an older release build on another PC, also with a lower version than the Store version, it still cannot detect that an update is available. Although I could not debug it, I think it also fails to get the real version number from the Store.
Any idea what I did wrong?
public async Task<bool> IsUpdateAvailable()
{
    var updates = await context.GetAppAndOptionalStorePackageUpdatesAsync();
    var packageId = Windows.ApplicationModel.Package.Current.Id;
    var currentVersion = packageId.Version;
    //var versionString = string.Format("{0}.{1}.{2}.{3}", version.Major, version.Minor, version.Build, version.Revision);
    //Debug.WriteLine("Current Version: " + versionString);
    foreach (var item in updates)
    {
        var onlineVersion = item.Package.Id.Version;
        if (onlineVersion.Major > currentVersion.Major)
            return true;
        if (onlineVersion.Minor > currentVersion.Minor)
            return true;
        if (onlineVersion.Revision > currentVersion.Revision)
            return true;
        if (onlineVersion.Build > currentVersion.Build)
            return true;
    }
#if DEBUG
    Debug.WriteLine("No update is available.");
#endif
    return false;
}
The GetAppAndOptionalStorePackageUpdatesAsync method works a little differently from the way you think it does. It doesn't return information about the update package; it returns information about the currently installed package that has an update available. You can check the documentation's description: "Gets the collection of packages for the current app that have updates available for download from the Microsoft Store."
If you do have an update for your app, then the behavior you got is expected. For example, let's say you have package A (v1.0.0.1) installed now, and package A (v1.0.1.1) is available as an update in the Store.
When you call this method, it returns package A (v1.0.0.1), because that package has an available update. It won't return package A (v1.0.1.1).
Update:
I tried to test this in packaged apps, since there are some differences between packaged apps (Store-downloaded, sideloaded) and non-packaged apps (debug, running from VS).
This was a simple test, so I chose to test with a sideloaded package. I created a new certificate that has the same publisher name as the Store one. Then I used this new certificate to package the sideloaded app, to make sure its package ID is the same as the Store app's. This is important because if your package ID is different from the Store app's, the StoreContext object will throw an exception. I also checked the app version to make sure it was the same as the latest version in the Store.
Then I installed the sideloaded app and called the GetAppAndOptionalStorePackageUpdatesAsync method to check for updates. The method returned null. So it works correctly when the app version is the same.
After this, I tried to decrease the version of the app and test again. This time, the method reported that an update was available.
You could directly create a new package that calls the GetAppAndOptionalStorePackageUpdatesAsync method and upload it to the Store. When the app is available for download, you can get it and test again.
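Given that behavior, the version comparison in the question is unnecessary: the method only returns packages that actually have an update in the Store, so checking for a non-empty collection is enough. A minimal sketch, assuming a StoreContext field named context as in the question:

```csharp
using System.Threading.Tasks;
using Windows.Services.Store;

public async Task<bool> IsUpdateAvailable()
{
    // The returned collection only contains installed packages that have
    // an update available in the Store, so non-empty already means
    // "update available" - no version comparison needed.
    var updates = await context.GetAppAndOptionalStorePackageUpdatesAsync();
    return updates.Count > 0;
}
```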
I have a task that requires me to create a microservice that uses PuppeteerSharp to take page screenshots. To do so, I used the ASP.NET Core Web API project template. In the Startup.cs file I launch PuppeteerSharp. Below is the code for that:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    Browser puppeteerBrowser = null;
    Task.Run(async () => puppeteerBrowser = await LaunchPuppeteerBrowserAsync());
    services.AddSingleton(puppeteerBrowser);
}

public static async Task<Browser> LaunchPuppeteerBrowserAsync()
{
    Console.WriteLine("Starting to launch CHROMIUM...");
    // Uncomment to let puppeteer download chromium
    //await new BrowserFetcher().DownloadAsync(BrowserFetcher.DefaultRevision);
    var browser = await Puppeteer.LaunchAsync(new LaunchOptions
    {
        // Comment to let puppeteer run downloaded chromium
        ExecutablePath = "/usr/bin/chromium",
        Args = new[]{ "--no-sandbox", "--disable-gpu-rasterization", "--disable-remote-extensions" },
        Headless = true,
        // Didn't help
        //IgnoredDefaultArgs = new []{ "--enable-gpu-rasterization", "--enable-remote-extensions", "--load-extension=" }
    });
    Console.WriteLine("CHROMIUM launched successfully");
    return browser;
}
Further, the microservice is required to run in a container on our server. I created a Dockerfile that uses the standard Docker image for .NET Core applications from Docker Hub, created by Microsoft. To illustrate my problem, I created a test application and uploaded it to a GitHub repo; you can find it here:
ASP.net Core Web Api project
There is a Dockerfile, which can be used to build the image like so:
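For reference, a typical build invocation from the repository root looks like this (the image tag is just an example):

```shell
# Build the image from the Dockerfile in the current directory.
# The tag name "screenshot-service" is only an example.
docker build -t screenshot-service .
```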
*Please note that I'm using Windows 10, and for test purposes I installed Docker on my machine.
So, as you might know, Docker images have a layered structure. To take advantage of that feature, I decided to add the download and installation of the Chromium browser as one of the steps in the Dockerfile. That way, when the first request since launch comes to the server, PuppeteerSharp will not have to download the browser before doing its job; it will use the binary already in the image (or whatever the launchable file is called on Linux; on Windows that would be an .exe).
You can see that I provide the executable path to the PuppeteerSharp browser when launching it, in the code example I provided earlier (ExecutablePath = "/usr/bin/chromium").
The container can be started the following way:
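For example, assuming the image was tagged screenshot-service and the app listens on port 80 inside the container (both assumptions), the run command would look like:

```shell
# Run the container detached and map container port 80 to host port 5000.
# Image tag and ports are assumptions - adjust them to your setup.
docker run -d -p 5000:80 screenshot-service
```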
And finally, I can describe my problem. First, using the command docker exec -it e95e6d9fca63 bash, I attach to the container's bash.
There I run a package installation so that the ps command works. I also install the less package (apt-get install less).
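On a Debian-based image, the package that provides ps is procps (an assumption about the base image used here), so the installation step inside the container would look like:

```shell
# procps provides the ps command; less is used below to page the output.
apt-get update
apt-get install -y procps less
```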
Then I run ps aux | less to show the processes running inside the container. I get the following result, which shows the command-line parameters the chromium process was launched with. I underlined the ones that bother me:
/usr/lib/chromium/chromium --show-component-extension-options --enable-gpu-rasterization --no-default-browser-check --disable-pings --media-router=0 --enable-remote-extensions --load-extension= --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=TranslateUI,BlinkGenPropertyTrees --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --headless --hide-scrollbars --mute-audio about:blank --no-sandbox --disable-gpu-rasterization --disable-remote-extensions --remote-debugging-port=0 --user-data-dir=/tmp/kgkyt1d1.bqe
The latter --disable-gpu-rasterization --disable-remote-extensions arguments are the ones I set manually in the Args property of LaunchOptions; check the first code example I provided.
I also tried to use the IgnoredDefaultArgs property of LaunchOptions, because according to the library's documentation, the options set there will be ignored. You can also see that happening in the source code.
Launcher code - creates new chromium process, which has "PrepareChromiumArgs" method
The parameters put into the IgnoredDefaultArgs array do get deleted from the resulting argument array. But that didn't help: --enable-gpu-rasterization --enable-remote-extensions --load-extension= are still there.
The strange thing is that when I let PuppeteerSharp fetch the browser itself, comment out the ExecutablePath property from LaunchOptions, and start it, there are no such redundant parameters. I have checked this.
My guess is that the redundant arguments come from the executable I installed when it gets started by the library, somewhat like the way Windows lets you add extra command-line arguments in a shortcut's file properties. But is this possible on Linux?
Can anyone please help?
I'm developing a Visual Studio extension (VSIX project) that needs to manage Nuget packages of a given project.
I'm already using the IVsPackageInstaller service as documented here, but it is limited and I need more features (for example, getting the latest version number of a given package).
I searched but didn't find anything on how to programmatically interact with the Visual Studio Package Manager so I decided to go for the Nuget API directly.
I send HTTP requests to the NuGet API using the WebRequest class (because we can't use HttpClient in a VSIX project), but I'm hitting a problem: the requests go to a private NuGet feed that needs authentication! (It's hosted on Azure DevOps.)
I used Fiddler to check the HTTP requests sent to our Azure DevOps server. I see a POST request going to https://app.vssps.visualstudio.com/_apis/Token/SessionTokens with a token in the response, but this is not the token I'm looking for.
The token passed to the NuGet API is a Basic token that comes from I don't know where. I couldn't find this token anywhere in the HTTP responses I captured.
I can also see that some responses from our Azure DevOps server contain headers like this (I changed the GUID):
WWW-Authenticate: Bearer authorization_uri=https://login.windows.net/ce372fcc-5e17-490b-ad99-47565dac8a84
I can find this GUID back in the %userprofile%\AppData\Local\.IdentityService\AccountStore.json file, there is definitely something going on here. And the SessionTokens.json file in the same folder looks reeeaaally interesting too but it's encrypted...
I also tried to dig in the Registry to see if I can find interesting information for example at the path specified in Simon's comment but it seems VS2017 doesn't store the token there anymore.
I also loaded the privateregistry.bin file (aka the Visual Studio Settings Store) and searched everywhere but couldn't find anything.
So instead of trying to reverse engineer Visual Studio, I wanted to access its credential provider directly. I tried to access several services and classes:
var componentModel = await ServiceProvider.GetGlobalServiceAsync(typeof(SComponentModel)) as IComponentModel;
var credentialProvider = componentModel.GetService<IVsCredentialProvider>();
var credentialServiceProvider = componentModel.GetService<ICredentialServiceProvider>();
var defaultCredentialServiceProvider = new DefaultVSCredentialServiceProvider();
But none of them are working (return null or Exception).
I wandered in the NuGet.PackageManagement.VisualStudio project on Github but couldn't find my answer.
There are also many Nuget packages like NuGet.PackageManagement.VisualStudio, Microsoft.VisualStudio.Services.Release.Client, Microsoft.VisualStudio.Services.ExtensionManagement.WebApi, Microsoft.VisualStudio.Services.InteractiveClient just to name a few but honestly I don't know if what I'm looking for is there...
So how to access the Nuget credentials used by Visual Studio?
I'll take any solution that gives me access to all the read-oriented NuGet features, for example programmatically using the Visual Studio package management, decrypting this SessionTokens.json file, or accessing the Visual Studio credential provider.
The less hacky the answer, the better it is, of course.
At this point you probably already guessed, I don't want to store the username and password somewhere myself. I need to create a user-friendly VS extension, that's why I want to retrieve and use the credentials already saved in Visual Studio by the users.
Thank you so much if you can solve this problem.
NuGet Client SDK
Thanks a lot to Simon who pointed me in the direction of NuGet.Client.
The only documentation from Microsoft links to a 2016 blog post from Dave Glick, but they also give a nice note:
These blog posts were written shortly after the 3.4.3 version of the NuGet client SDK packages were released. Newer versions of the packages may be incompatible with the information in the blog posts.
Alright, then I guess we will make do with Dave's blog...
You should install two packages: NuGet.Client and NuGet.Protocol.
Then here is the code, for example, to get the latest version of a package:
using NuGet.Configuration;
using NuGet.Protocol;
using NuGet.Protocol.Core.Types;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
namespace MyProject
{
    public class NugetHelper
    {
        public async Task<string> GetLatestVersionNumberFromNugetFeedAsync(NugetPackage package)
        {
            try
            {
                Logger logger = new Logger(); // Just a class implementing the NuGet.Common.ILogger interface
                List<Lazy<INuGetResourceProvider>> providers = new List<Lazy<INuGetResourceProvider>>();
                providers.AddRange(Repository.Provider.GetCoreV3()); // Add v3 API support
                PackageSource packageSource = new PackageSource(package.Source.ToString());
                SourceRepository sourceRepository = new SourceRepository(packageSource, providers);
                PackageMetadataResource packageMetadataResource = await sourceRepository.GetResourceAsync<PackageMetadataResource>();
                var searchMetadata = await packageMetadataResource.GetMetadataAsync(package.Name, false, false, new SourceCacheContext(), logger, new CancellationToken());
                var versionNumber = searchMetadata.FirstOrDefault().Identity.Version.OriginalVersion;
                return versionNumber;
            }
            catch (Exception)
            {
                return null;
            }
        }
    }

    public class NugetPackage
    {
        public string Name { get; set; }
        public string Version { get; set; }
        public string MinimumVersion { get; set; }
        public Uri Source { get; set; }
    }
}
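For illustration, calling the helper might look like this (the package name and feed URL are just examples):

```csharp
var helper = new NugetHelper();
var package = new NugetPackage
{
    Name = "Newtonsoft.Json",
    Source = new Uri("https://api.nuget.org/v3/index.json")
};

// Returns null on failure, since the helper swallows exceptions.
string latestVersion = await helper.GetLatestVersionNumberFromNugetFeedAsync(package);
```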
Physical Token Storage Place
I tried to reverse engineer where Visual Studio stores the token used in HTTP requests to the NuGet API.
I exported all the different hives of the Registry, including the Visual Studio Settings Store (privateregistry.bin), to text files.
Then I added a brand new NuGet feed in Visual Studio, got the login popup as expected, and logged in.
Finally, I exported all the hives again to text files and compared them with the files from before authentication.
I found nothing interesting in the VS Settings Store.
The only interesting changes were
[HKEY_CURRENT_USER\Software\Microsoft\VSCommon\ConnectedUser\IdeUserV2]
#="0746fb8e-4bc2-4ee5-b804-0084af725deb"
"AccountsRoaming_LastAccountsSettingVersion"=dword:0000025b
[HKEY_CURRENT_USER\Software\Microsoft\VsHub\ServiceModules\Settings\PerHubName\vshub\ConnectedUser\IdeUserV2\Cache]
"LastProfileVersion"=dword:10b8260a
and
[HKEY_USERS\S-1-5-21-1787888774-1556370510-3519259403-1001\Software\Microsoft\VSCommon\Keychain]
"TokenStorageNameSpace"="VisualStudio"
[HKEY_USERS\S-1-5-21-1787888774-1556370510-3519259403-1001\Software\Microsoft\VsHub\ServiceModules\Settings\PerHubName\vshub\ConnectedUser\IdeUserV2\Cache]
"LastProfileVersion"=dword:10b8260a
Maybe somewhere there is the key to these encrypted SessionTokens.json and IdentityServiceAdalCache.cache files, but having the data stored in hexadecimal makes things even harder.
I had to give up on this; there is almost no chance I could reverse engineer the authentication system.
Visual Studio credentials provider
The NuGet Client SDK solves my issue but doesn't actually answer this SO question.
As I said, I tried to call
var componentModel = await ServiceProvider.GetGlobalServiceAsync(typeof(SComponentModel)) as IComponentModel;
componentModel.GetService<ICredentialServiceProvider>()
But this didn't work, so if anybody knows how to access the Visual Studio credential provider, I would be really glad to hear the answer.
I'm fighting with Google's docs while setting up Cloud Pub/Sub with .NET using the Pub/Sub emulator.
https://cloud.google.com/dotnet/docs/getting-started/using-pub-sub
https://cloud.google.com/pubsub/docs/publisher
https://cloud.google.com/pubsub/docs/emulator
Coming from a Rails background, I'm tasked with implementing Cloud Pub/Sub for a .NET product running on .NET Core in our Google cloud, to enable it to publish. In Ruby, that would be:
Google::Cloud::Pubsub.new(project: project_id, emulator_host: emulator_host)
From the documentation using .NET, I keep coming back to the following:
PublisherServiceApiClient publisherClient = PublisherServiceApiClient.Create();
PublisherClient publisher = PublisherClient.Create(...)
However, the library used in the docs, Google.Cloud.PubSub.V1 -Pre, does not contain that definition:
'PublisherClient' does not contain a definition for 'Create'.
Instead, I get CreateAsync that takes in TopicName, PublisherClient.ClientCreationSettings and PublisherClient.Settings.
https://googleapis.github.io/google-cloud-dotnet/docs/Google.Cloud.PubSub.V1/api/Google.Cloud.PubSub.V1.PublisherClient.html
I noticed that PublisherServiceApiClient can take a Channel, but I'm confused about how to get this going.
To conclude with an actual question: how does one currently implement Cloud Pub/Sub with .NET, both in the cloud and locally with the emulator? Adding to that, am I using the wrong library or the wrong docs?
Any suggestions, pointers or piece of advice would be truly appreciated.
I managed a solution that I am happy with.
Instead of using the PublisherClient, I went with using the PublisherServiceApiClient alone.
var emulatorAddr = Environment.GetEnvironmentVariable("PUBSUB_EMULATOR_HOST");
PublisherServiceApiClient pub;
if (emulatorAddr != null)
{
    var channel = new Channel(emulatorAddr, ChannelCredentials.Insecure);
    pub = PublisherServiceApiClient.Create(channel);
}
else
{
    pub = PublisherServiceApiClient.Create();
}
This meant that publishing was slightly more involved than sending a string to the PublisherClient, but overall not so bad.
PubsubMessage msg = new PubsubMessage
{
Data = ByteString.CopyFromUtf8(JsonConvert.SerializeObject(payload))
};
pub.PublishAsync(topic, new[]{ msg });
If the project is running on Google Compute Engine, it will have default credentials. Otherwise, whether you're running an emulator locally or in Docker, you can define PUBSUB_EMULATOR_HOST.
What really helped was this https://googleapis.github.io/google-cloud-dotnet/docs/Google.Cloud.PubSub.V1/index.html
To make the PublisherClient connect to a local emulator, you need to pass custom ServiceEndpoint and ChannelCredentials to CreateAsync:
var serviceEndpoint = new ServiceEndpoint(theEmulatorHost, theEmulatorPort);
var publisherClient = await PublisherClient.CreateAsync(
topicName,
new PublisherClient.ClientCreationSettings(credentials: ChannelCredentials.Insecure, serviceEndpoint: serviceEndpoint));
To switch to the real PubSub, just leave away the ClientCreationSettings.
You can use the EmulatorDetection property on the ClientCreationSettings using extension method .WithEmulatorDetection(EmulatorDetection.EmulatorOrProduction). Like this:
PublisherClient publisher = await PublisherClient.CreateAsync(
topicName,
new PublisherClient.ClientCreationSettings()
.WithEmulatorDetection(EmulatorDetection.EmulatorOrProduction));
This will work if you have the following environment variable for the local emulator endpoint: PUBSUB_EMULATOR_HOST=localhost:8085
(If you use Visual Studio, you might have to restart VS for the environment variable to be detected.)
On Windows I had problems using the set PUBSUB_EMULATOR_HOST=localhost:8085 command, so I ended up adding it manually.
Details here: https://cloud.google.com/pubsub/docs/emulator
Extra tip: you can create topics directly against the emulator's REST API using curl: curl -X PUT http://localhost:8085/v1/projects/my-project-name/topics/my-topic
I wonder, is there any way to check whether a Service Bus topic is empty or not?
I tried with the NuGet package WindowsAzure.ServiceBus and the sample code below.
With this package I do not get an ITopicClient :(
var topicClient = TopicClient(); // we can not create object
var topicPeek = topicClient.Peek();
TopicDescription topicDescription = new TopicDescription(topicName);
var topicSize = topicDescription.SizeInBytes;
Is there any way to do so?
With the WindowsAzure.ServiceBus package, you can create instances of Service Bus clients by using MessagingFactory.Create to get a reference to a MessagingFactory. Once you have one of those, you can call CreateTopicClient to get a TopicClient instance.
(Note that there's also a newer package called Microsoft.Azure.ServiceBus that's a bit limited in functionality, but it supports .NET Core. If you use that package, the class hierarchy is somewhat different, and you can create instances of the clients directly.)
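As a sketch of how that fits together with the question (the connection string and entity names are placeholders): NamespaceManager exposes runtime metadata such as message counts, while MessagingFactory creates the client objects. Note that messages on a topic live in its subscriptions, so "empty" is really a per-subscription question:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class TopicEmptyCheck
{
    static void Main()
    {
        // Placeholder connection string and entity names - replace with yours.
        var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";

        // NamespaceManager gives access to entity metadata such as counts.
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        var subscription = namespaceManager.GetSubscription("my-topic", "my-subscription");
        bool isEmpty = subscription.MessageCount == 0;

        // MessagingFactory is how you create a TopicClient with this package.
        var factory = MessagingFactory.CreateFromConnectionString(connectionString);
        var topicClient = factory.CreateTopicClient("my-topic");
    }
}
```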