I am using Azure Cosmos DB with a .NET Core 2.1 application, through the Gremlin driver. It works fine, but every few days it starts throwing socket exceptions on the server and we have to recycle the IIS pool. We average about 10,000 hits per day.
We are currently using the default Gateway mode. Should we switch to Direct mode, since this might be a firewall issue?
Here is the implementation:
private DocumentClient GetDocumentClient(CosmosDbConnectionOptions configuration)
{
    _documentClient = new DocumentClient(
        new Uri(configuration.Endpoint),
        configuration.AuthKey,
        new ConnectionPolicy());

    //create database if not exists
    _documentClient.CreateDatabaseIfNotExistsAsync(new Database { Id = configuration.Database });

    return _documentClient;
}
and in startup.cs:
services.AddSingleton(x => GetDocumentClient(cosmosDBConfig));
and here is how we are communicating with cosmos db:
private DocumentClient _documentClient;
private DocumentCollection _documentCollection;
private CosmosDbConnectionOptions _cosmosDBConfig;

public DocumentCollectionFactory(DocumentClient documentClient, CosmosDbConnectionOptions cosmosDBConfig)
{
    _documentClient = documentClient;
    _cosmosDBConfig = cosmosDBConfig;
}

public async Task<DocumentCollection> GetProfileCollectionAsync()
{
    if (_documentCollection == null)
    {
        _documentCollection = await _documentClient.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri(_cosmosDBConfig.Database),
            new DocumentCollection { Id = _cosmosDBConfig.Collection },
            new RequestOptions { OfferThroughput = _cosmosDBConfig.Throughput });
    }
    return _documentCollection;
}
and then:
public async Task CreateProfile(Profile profile)
{
    var graphCollection = await _graphCollection.GetProfileCollectionAsync();
    var createQuery = GetCreateQuery(profile);
    IDocumentQuery<dynamic> query = _documentClient.CreateGremlinQuery<dynamic>(graphCollection, createQuery);
    if (query.HasMoreResults)
    {
        await query.ExecuteNextAsync();
    }
}
I'm assuming that for communication with Cosmos DB you are using HttpClient (Gateway mode runs over HTTPS). The application should share a single instance of HttpClient.
Every time you open connections after disposing an HttpClient, a bunch of its old connections are still in the TIME_WAIT state. This means the connection was closed on one side (the OS) but is still "waiting for additional packets".
By default, Windows may hold a connection in this state for 240 seconds, and there is a limit to how quickly the OS can open new sockets. All of this may lead to a System.Net.Sockets.SocketException.
There is a very good article that explains in detail why and how this problem appears, digging into the TCP state diagram.
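For illustration, here is a minimal sketch of sharing one HttpClient for the lifetime of the process (the holder class is illustrative, not from the question):

using System.Net.Http;

public static class HttpClientHolder
{
    // One shared instance for the whole process. Repeatedly creating and
    // disposing HttpClient leaves sockets in TIME_WAIT and can exhaust
    // the ephemeral port range under load.
    public static readonly HttpClient Client = new HttpClient();
}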
UPDATED
Possible solution.
You are using the default ConnectionPolicy object. That object has a property called IdleTcpConnectionTimeout which controls the amount of idle time after which unused connections are closed. By default, idle connections are kept open indefinitely. The value must be greater than or equal to 10 minutes.
So the code could look like:
private DocumentClient GetDocumentClient(CosmosDbConnectionOptions configuration)
{
    _documentClient = new DocumentClient(
        new Uri(configuration.Endpoint),
        configuration.AuthKey,
        new ConnectionPolicy
        {
            // Close idle TCP connections after 10 minutes (the minimum allowed value)
            IdleTcpConnectionTimeout = TimeSpan.FromMinutes(10)
        });

    //create database if not exists
    _documentClient.CreateDatabaseIfNotExistsAsync(new Database { Id = configuration.Database });

    return _documentClient;
}
Here is a link to ConnectionPolicy Class documentation
SignalR version: SignalR 2.4.1
.Net Framework version: 4.8 (I am not using .Net Core)
SignalR transport: websockets
I am developing a background service for SignalR (PresenceMonitor) where I need to detect whether a connection with a specific client id is alive or not.
I am using the following code for Presence Monitor to start with what I want to achieve:
using System;
using System.Data.Entity.SqlServer;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using Microsoft.AspNet.SignalR.Transports;

namespace UserPresence
{
    /// <summary>
    /// This class keeps track of connections that the <see cref="UserTrackingHub"/>
    /// has seen. It uses a time based system to verify if connections are *actually* still online.
    /// Using this class combined with the connection events SignalR raises will ensure
    /// that your database will always be in sync with what SignalR is seeing.
    /// </summary>
    public class PresenceMonitor
    {
        private readonly ITransportHeartbeat _heartbeat;
        private Timer _timer;

        // How often we plan to check if the connections in our store are valid
        private readonly TimeSpan _presenceCheckInterval = TimeSpan.FromSeconds(10);

        // How many periods need to pass without an update to consider a connection invalid
        private const int periodsBeforeConsideringZombie = 3;

        // The number of seconds that have to pass to consider a connection invalid.
        private readonly int _zombieThreshold;

        public PresenceMonitor(ITransportHeartbeat heartbeat)
        {
            _heartbeat = heartbeat;
            _zombieThreshold = (int)_presenceCheckInterval.TotalSeconds * periodsBeforeConsideringZombie;
        }

        public void StartMonitoring()
        {
            if (_timer == null)
            {
                _timer = new Timer(_ =>
                {
                    try
                    {
                        Check();
                    }
                    catch (Exception ex)
                    {
                        // Don't throw on background threads, it'll kill the entire process
                        Trace.TraceError(ex.Message);
                    }
                },
                null,
                TimeSpan.Zero,
                _presenceCheckInterval);
            }
        }

        private void Check()
        {
            using (var db = new UserContext())
            {
                // Get all connections on this node and update the activity
                foreach (var trackedConnection in _heartbeat.GetConnections())
                {
                    if (!trackedConnection.IsAlive)
                    {
                        continue;
                    }

                    Connection connection = db.Connections.Find(trackedConnection.ConnectionId);

                    // Update the client's last activity
                    if (connection != null)
                    {
                        connection.LastActivity = DateTimeOffset.UtcNow;
                    }
                    else
                    {
                        // We have a connection that isn't tracked in our DB!
                        // This should *NEVER* happen
                        // Debugger.Launch();
                    }
                }

                // Now check all db connections to see if there are any zombies.
                // Remove all connections that haven't been updated based on our threshold.
                var zombies = db.Connections.Where(c =>
                    SqlFunctions.DateDiff("ss", c.LastActivity, DateTimeOffset.UtcNow) >= _zombieThreshold);

                // We're doing ToList() since there's no MARS support on azure
                foreach (var connection in zombies.ToList())
                {
                    db.Connections.Remove(connection);
                }

                db.SaveChanges();
            }
        }
    }
}
The issue I am facing is here:
// Get all connections on this node and update the activity
foreach (var trackedConnection in _heartbeat.GetConnections())
{
Scanning all the connections when there is a large number of them deeply affects the performance of my application and causes a lot of CPU spikes.
In my database I already have the mapping of connection ids per user, and based on that I already have a field in my cache indicating whether each user has any connection in the db. Those mappings are already cached. I would scan each of those mappings and check whether the connection (connection id) for that specific user is alive or not. I looked at the ITransportHeartbeat interface for this, but unfortunately that interface gives us just these four methods:
//
// Summary:
//     Manages tracking the state of connections.
public interface ITransportHeartbeat
{
    //
    // Summary:
    //     Adds a new connection to the list of tracked connections.
    //
    // Parameters:
    //   connection:
    //     The connection to be added.
    //
    // Returns:
    //     The connection it replaced, if any.
    ITrackingConnection AddOrUpdateConnection(ITrackingConnection connection);

    //
    // Summary:
    //     Gets a list of connections being tracked.
    //
    // Returns:
    //     A list of connections.
    IList<ITrackingConnection> GetConnections();

    //
    // Summary:
    //     Marks an existing connection as active.
    //
    // Parameters:
    //   connection:
    //     The connection to mark.
    void MarkConnection(ITrackingConnection connection);

    //
    // Summary:
    //     Removes a connection from the list of tracked connections.
    //
    // Parameters:
    //   connection:
    //     The connection to remove.
    void RemoveConnection(ITrackingConnection connection);
}
There is no method to get the state of a connection by connection id. Is there any way to get information about a specific connection without scanning all the connections? I am aware of the traditional way, e.g. _heartbeat.GetConnections().Select(b => b.ConnectionId), but that code also scans all the connections.
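The closest I can get with the existing API is to scan once per check cycle and index the result by connection id, which at least amortizes the scan across all the per-user lookups (a rough sketch of the idea, not an existing API):

// Build the index once per Check() cycle; tracked connection ids are unique.
var aliveById = _heartbeat.GetConnections()
    .Where(c => c.IsAlive)
    .ToDictionary(c => c.ConnectionId);

// Each cached user-to-connection mapping can then be checked in O(1):
bool IsConnectionAlive(string connectionId) => aliveById.ContainsKey(connectionId);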
I am also aware of the OnDisconnected event, which we could use on the Hub itself, but OnDisconnected isn't guaranteed to fire (the browser can close, the internet can drop, the site can restart).
Is there any code I could hook into my Hub itself to detect the ping done by the heartbeat API? I could store the last ping per connection (denormalizing the way the last ping is detected) and then decide whether that connection is dead or not.
SignalR for .NET Core has something like that:
var heartbeat = Context.Features.Get<IConnectionHeartbeatFeature>();
heartbeat.OnHeartbeat(MyAction, state);
but I am looking for a similar feature in SignalR for the .NET Framework.
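For reference, this is roughly how the .NET Core feature could record the last ping per connection in a Hub (a sketch; the static dictionary store is my own assumption, not an official API):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Connections.Features;
using Microsoft.AspNetCore.SignalR;

public class UserTrackingHub : Hub
{
    // ConnectionId -> time of the last heartbeat tick.
    private static readonly ConcurrentDictionary<string, DateTimeOffset> LastPing =
        new ConcurrentDictionary<string, DateTimeOffset>();

    public override Task OnConnectedAsync()
    {
        var heartbeat = Context.Features.Get<IConnectionHeartbeatFeature>();
        heartbeat?.OnHeartbeat(
            state => LastPing[(string)state] = DateTimeOffset.UtcNow,
            Context.ConnectionId);
        return base.OnConnectedAsync();
    }
}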
I am using the following version of the ServiceStack .NET Core Redis library:
ServiceStack.Redis.Core 5.9.2
I am using the library to access a Redis cache I have created to persist values for my AWS Serverless Application using .NET Core 3.1. I have paid for a commercial license for ServiceStack Redis.
Periodically and without warning, my application captures the following error when trying to create a Redis client:
Exception: System.Exception: No master found in: redis-cluster-api-prd-lcs.in-xxxxxxxx:6379
at ServiceStack.Redis.RedisResolver.CreateRedisClient(RedisEndpoint config, Boolean master) in C:\BuildAgent\work\b2a0bfe2b1c9a118\src\ServiceStack.Redis\RedisResolver.cs:line 116
at ServiceStack.Redis.RedisResolver.CreateMasterClient(Int32 desiredIndex) in C:\BuildAgent\work\b2a0bfe2b1c9a118\src\ServiceStack.Redis\RedisResolver.cs:line 142
at ServiceStack.Redis.RedisManagerPool.GetClient() in C:\BuildAgent\work\b2a0bfe2b1c9a118\src\ServiceStack.Redis\RedisManagerPool.cs:line 174
at LCSApi.UtilityCommand.Cache.IsCacheValueExists(String cacheKey, RedisManagerPool pool) in D:\a\1\s\testapi\Utility.cs:line 167
at LCSApi.Functions.LcsConfigurationSweeper(ILambdaContext context) in D:\a\1\s\testapi\Function.cs:line 2028
Exception: System.Exception: No master found in: redis-cluster-api-prd-lcs.in
Other times, the same code works fine. My implementation is quite simple:
private readonly RedisManagerPool _redisClient;

_redisClient = new RedisManagerPool(
    Environment.GetEnvironmentVariable("CACHE_URL") + ":" +
    Environment.GetEnvironmentVariable("CACHE_PORT"));

public static T GetCacheValue<T>(string cacheKey, RedisManagerPool pool)
{
    T cacheValue;
    try
    {
        //StackExchange.Redis.IDatabase cache = Functions._redisConnect.GetDatabase();
        //string value = cache.StringGet(cacheKey);
        //cacheValue = (T)Convert.ChangeType(value, typeof(T));
        using (var client = pool.GetClient())
        {
            client.RetryCount = Convert.ToInt32(Environment.GetEnvironmentVariable("CACHE_RETRY_COUNT"));
            client.RetryTimeout = Convert.ToInt32(Environment.GetEnvironmentVariable("CACHE_RETRY_TIMEOUT"));
            cacheValue = client.Get<T>(cacheKey);
        }
    }
    catch (Exception ex)
    {
        //Console.WriteLine($"[CACHE_EXCEPTION] {ex.ToString()}");
        cacheValue = GetParameterSSMFallback<T>(cacheKey);
        //Console.WriteLine($"[CACHE_EXCEPTION] Fallback SSM parameter --> {cacheValue}");
    }
    return cacheValue;
}
It happens often enough that I've had to write a 'fallback' routine to fetch the value from the AWS Parameter Store where it originates. Not ideal.
I can find next to nothing about this error online. I've tried to sign up to the ServiceStack forums without success; it won't let me sign up for some reason, even though I have a commercial license. Can anyone assist?
The error is due to the client failing to connect to a master instance, as identified by the Redis ROLE command. This could happen during an ElastiCache failover, after which the master instance eventually reappears under the original DNS name:
Amazon ElastiCache for Redis will repair the node by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. Thus, the DNS name for a Redis node remains constant, but the IP address of a Redis node can change over time.
ServiceStack.Redis will try to connect to a master instance using all specified master connections (typically only 1). If it fails to connect to a master instance it has to give up as the client expects to perform operations on the read/write master instance.
If it's expected the master instance will return under the same DNS name we can use a custom IRedisResolver to continually retry connecting on the same connection for a master instance for a specified period of time, e.g:
public class ElasticCacheRedisResolver : RedisResolver
{
    public override RedisClient CreateRedisClient(RedisEndpoint config, bool master)
    {
        if (master)
        {
            //ElastiCache Redis will failover & retain same DNS for master
            var firstAttempt = DateTime.UtcNow;
            Exception firstEx = null;
            var retryTimeSpan = TimeSpan.FromMilliseconds(config.RetryTimeout);
            var i = 0;
            while (DateTime.UtcNow - firstAttempt < retryTimeSpan)
            {
                try
                {
                    var client = base.CreateRedisClient(config, master: true);
                    return client;
                }
                catch (Exception ex)
                {
                    firstEx ??= ex;
                    ExecUtils.SleepBackOffMultiplier(++i);
                }
            }
            throw new TimeoutException(
                $"Could not resolve master within {config.RetryTimeout}ms RetryTimeout", firstEx);
        }
        return base.CreateRedisClient(config, master: false);
    }
}
Which you configure with your Redis Client Manager, e.g:
private static readonly RedisManagerPool redisManager;
redisManager = new RedisManagerPool(...) {
RedisResolver = new ElasticCacheRedisResolver()
};
Note: you should use only one shared instance of a Redis Client Manager like RedisManagerPool so that all clients share the same connection pool. If the class containing the redisManager is not a singleton, the manager should be assigned to a static field, ensuring the same singleton instance is used to retrieve clients.
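For example, a minimal sketch of a process-wide holder combining the pooled manager with the resolver above (the RedisConfig class name and env-var wiring are illustrative):

public static class RedisConfig
{
    // One pooled manager for the whole process; every GetClient() call
    // borrows a connection from the same shared pool.
    public static readonly IRedisClientsManager Manager =
        new RedisManagerPool(
            Environment.GetEnvironmentVariable("CACHE_URL") + ":" +
            Environment.GetEnvironmentVariable("CACHE_PORT"))
        {
            RedisResolver = new ElasticCacheRedisResolver()
        };
}

Callers then use using (var client = RedisConfig.Manager.GetClient()) { ... } everywhere instead of constructing new pools.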
I use the MongoDB driver to connect to the database. When my form loads, I want to set up the connection and check whether it is OK or not. I do it like this:
var connectionString = "mongodb://localhost";
var client = new MongoClient(connectionString);
var server = client.GetServer();
var database = server.GetDatabase("reestr");
But I do not know how to check the connection. I tried wrapping this code in a try-catch, but to no avail. Even if I provide an incorrect connectionString, I still do not get any error message.
To ping the server with the new 3.0 driver, it's:
var database = client.GetDatabase("YourDbHere");
database.RunCommandAsync((Command<BsonDocument>)"{ping:1}")
.Wait();
There's a ping method for that:
var connectionString = "mongodb://localhost";
var client = new MongoClient(connectionString);
var server = client.GetServer();
server.Ping();
Full example for driver version 2.4.3, where client.GetServer() isn't available, based on Paul Keister's answer:
client = new MongoClient("mongodb://localhost");
database = client.GetDatabase(mongoDbStr);
bool isMongoLive = database.RunCommandAsync((Command<BsonDocument>)"{ping:1}").Wait(1000);
if (isMongoLive)
{
    // connected
}
else
{
    // couldn't connect
}
I've had the same question as the OP, and tried each and every solution I was able to find on the Internet...
Well, none of them worked to my true satisfaction, so I opted to research a reliable and responsive way of checking whether the connection to a MongoDB database server is alive, without blocking the application's synchronous execution for too long.
So here are my prerequisites:
Synchronous processing of the connection check
Short to very short time slice for the connection check
Reliability of the connection check
If possible, not throwing exceptions and not triggering timeouts
I've provided a fresh MongoDB Installation (version 3.6) on the default localhost URL: mongodb://localhost:27017. I've also written down another URL, where there was no MongoDB Database Server: mongodb://localhost:27071.
I'm also using the C# Driver 2.4.4 and do not use the legacy implementation (MongoDB.Driver.Legacy assembly).
So my expectation is: when I check the connection to the first URL, it should report OK for a live connection to an existing MongoDB server, and when I check the connection to the second URL, it should report failure for the non-existent MongoDB server...
Using the IMongoDatabase.RunCommand method queries the server and causes the server response timeout to elapse, so it does not qualify against the prerequisites. Furthermore, after the timeout it breaks with a TimeoutException, which requires additional exception handling.
This actual SO question and also this other SO question delivered most of the starting information I needed for my solution... So guys, many thanks for those!
Now my solution:
private static bool ProbeForMongoDbConnection(string connectionString, string dbName)
{
    var probeTask = Task.Run(() =>
    {
        var isAlive = false;
        var client = new MongoDB.Driver.MongoClient(connectionString);
        for (var k = 0; k < 6; k++)
        {
            client.GetDatabase(dbName);
            var server = client.Cluster.Description.Servers.FirstOrDefault();
            isAlive = (server != null &&
                       server.HeartbeatException == null &&
                       server.State == MongoDB.Driver.Core.Servers.ServerState.Connected);
            if (isAlive)
            {
                break;
            }
            System.Threading.Thread.Sleep(300);
        }
        return isAlive;
    });

    probeTask.Wait();
    return probeTask.Result;
}
The idea behind this is that the MongoDB server does not react (and seems to be non-existent) until a real attempt is made to access some resource on the server (for example a database). But retrieving a resource once is not enough, as the server has not yet updated its state in the client's cluster description. That update comes only when the resource is retrieved again; from that point on, the client has a valid cluster description with valid data inside it...
Generally it seems to me that the MongoDB server does not proactively propagate its cluster description to all connected clients. Rather, each client receives the description when a request to the server has been made. If any of you fellows have more information on this, please either confirm or correct my understanding of the topic...
Now, when we target an invalid MongoDB server URL, the cluster description remains invalid and we can catch that and deliver a usable signal for this case...
So the following statements (for the valid URL)
// The admin database should exist on each MongoDB 3.6 Installation, if not explicitly deleted!
var isAlive = ProbeForMongoDbConnection("mongodb://localhost:27017", "admin");
Console.WriteLine("Connection to mongodb://localhost:27017 was " + (isAlive ? "successful!" : "NOT successful!"));
will print out
Connection to mongodb://localhost:27017 was successful!
and the statements (for the invalid URL)
// The admin database should exist on each MongoDB 3.6 Installation, if not explicitly deleted!
isAlive = ProbeForMongoDbConnection("mongodb://localhost:27071", "admin");
Console.WriteLine("Connection to mongodb://localhost:27071 was " + (isAlive ? "successful!" : "NOT successful!"));
will print out
Connection to mongodb://localhost:27071 was NOT successful!
Here is a simple extension method to ping a MongoDB server:
public static class MongoDbExt
{
    public static bool Ping(this IMongoDatabase db, int secondToWait = 1)
    {
        if (secondToWait <= 0)
            throw new ArgumentOutOfRangeException("secondToWait", secondToWait, "Must be at least 1 second");

        return db.RunCommandAsync((Command<MongoDB.Bson.BsonDocument>)"{ping:1}").Wait(secondToWait * 1000);
    }
}
You can use it like so:
var client = new MongoClient("yourConnectionString");
var database = client.GetDatabase("yourDatabase");
if (!database.Ping())
throw new Exception("Could not connect to MongoDb");
This is a solution using the try-catch approach:
var database = client.GetDatabase("YourDbHere");
bool isMongoConnected;
try
{
    await database.RunCommandAsync((Command<BsonDocument>)"{ping:1}");
    isMongoConnected = true;
}
catch (Exception)
{
    isMongoConnected = false;
}
So when it fails to connect to the database, it will throw an exception and we can set our bool flag in the handler.
If you want to handle connection issues in your program, you can use the ICluster.DescriptionChanged event.
When the MongoClient is created, it will keep attempting to connect in the background until it succeeds.
using MongoDB.Driver;
using MongoDB.Driver.Core.Clusters;

var mongoClient = new MongoClient("mongodb://localhost");
mongoClient.Cluster.DescriptionChanged += Cluster_DescriptionChanged;

public void Cluster_DescriptionChanged(object sender, ClusterDescriptionChangedEventArgs e)
{
    switch (e.NewClusterDescription.State)
    {
        case ClusterState.Disconnected:
            break;
        case ClusterState.Connected:
            break;
    }
}
I'm developing a frequently used command line tool that is backed by Azure Cosmos DB (SQL API). It needs to check a few documents just after launch, and I found that creating the DocumentClient and locating the collection takes up to 5 seconds in total.
So I'm wondering whether there is any way to cache the DocumentClient or the Database/DocumentCollection connections locally, or any other way to improve the Cosmos DB related performance?
Here's my code; I'm talking about the constructor:
public static class CacheUtils
{
    private static readonly string DatabaseName = "myDatabase";
    private static readonly string CollectionName = "myLruCache";

    private static DocumentClient Client { get; }
    private static Database Database { get; }
    private static DocumentCollection DocumentCollection { get; }

    static CacheUtils()
    {
        var connectionPolicy = new ConnectionPolicy
        {
            EnableEndpointDiscovery = true,
            ConnectionMode = ConnectionMode.Direct,
            ConnectionProtocol = Protocol.Tcp,
            RequestTimeout = TimeSpan.FromSeconds(3),
            RetryOptions = new RetryOptions
            {
                MaxRetryAttemptsOnThrottledRequests = 3,
                MaxRetryWaitTimeInSeconds = 10
            }
        };

        Client = new DocumentClient(new Uri(myEndpoint), myAccessToken, connectionPolicy);
        Client.OpenAsync().GetResultSafe();
        Database = Client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseName }).GetResultSafe().Resource;
        DocumentCollection = Client.CreateDocumentCollectionIfNotExistsAsync(
            Database.SelfLink,
            new DocumentCollection { Id = CollectionName, DefaultTimeToLive = -1 },
            new RequestOptions { OfferThroughput = 1000 }).GetResultSafe().Resource;
    }

    // Omit CRUD operation wrappers
}
To measure the time cost of the initialization process, a Stopwatch was added:
var s1 = new Stopwatch();
s1.Start();
Console.WriteLine($"[{s1.Elapsed.TotalSeconds:F3}] DocDB Start");
Client = new DocumentClient(new Uri(endpoint), accessToken, connectionPolicy);
Client.OpenAsync().GetResultSafe();
Console.WriteLine($"[{s1.Elapsed.TotalSeconds:F3}] DocDB Client Done");
Database = Client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseName }).GetResultSafe().Resource;
Console.WriteLine($"[{s1.Elapsed.TotalSeconds:F3}] DocDB DB Done");
DocumentCollection = Client.CreateDocumentCollectionQuery(Database.SelfLink).Where(c => c.Id == CollectionName).ToList().FirstOrDefault();
Console.WriteLine($"[{s1.Elapsed.TotalSeconds:F3}] DocDB Coll Done");
Ran it three times:
# 1
[0.000] DocDB Start
[3.064] DocDB Client Done
[3.143] DocDB DB Done
[3.363] DocDB Coll Done
# 2
[0.000] DocDB Start
[2.256] DocDB Client Done
[2.314] DocDB DB Done
[2.617] DocDB Coll Done
# 3
[0.000] DocDB Start
[2.684] DocDB Client Done
[2.788] DocDB DB Done
[3.331] DocDB Coll Done
You can store DocumentClient into a static variable and reuse it across app instances.
E.g.
public class CosmosDbRepo : ICosmosDbRepo
{
    private static DocumentClient _cosmosDocumentClient;

    public CosmosDbRepo(IDatabaseFactory databaseFactory, CosmosDbConnectionParameters cosmosDbConnectionParameters)
    {
        _collectionUri = UriFactory.CreateDocumentCollectionUri(cosmosDbConnectionParameters.DatabaseId, cosmosDbConnectionParameters.CollectionId);
        if (_cosmosDocumentClient == null)
        {
            _cosmosDocumentClient = databaseFactory.CreateDbConnection(cosmosDbConnectionParameters, ConnectionMode.Direct).DocumentClient;
        }
    }
}
_cosmosDocumentClient can then be used by multiple instances of your app.
I am currently developing an Azure Function app that uses such a static Cosmos DB connection; the function app's instances share static objects.
If you run and then shut down a command line program, the static connection has to be recreated on every startup, so you don't get much benefit from it. Making the program run continuously will help. You may have multiple threads handling multiple units of work, and those instances can all share the static Cosmos DB connection.
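If thread safety is a concern when initializing that static field, a Lazy<T> wrapper is a common variant of the same idea (a sketch; the endpoint and key values are placeholders):

public static class CosmosClientHolder
{
    private const string Endpoint = "https://your-account.documents.azure.com:443/"; // placeholder
    private const string AuthKey = "<your-auth-key>"; // placeholder

    // Lazy<T> guarantees the client is created exactly once, even when several
    // threads hit it concurrently; a plain null check does not.
    private static readonly Lazy<DocumentClient> LazyClient =
        new Lazy<DocumentClient>(() =>
        {
            var client = new DocumentClient(new Uri(Endpoint), AuthKey,
                new ConnectionPolicy { ConnectionMode = ConnectionMode.Direct });
            client.OpenAsync().GetAwaiter().GetResult(); // warm up connections once
            return client;
        });

    public static DocumentClient Client => LazyClient.Value;
}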
I'm streaming data into BigQuery with the .NET API, and I noticed in Process Explorer that new TCP/IP connections are created and closed over and over again. I'm wondering if it's possible to reuse the connection and avoid the big overhead of repeated connection setup and teardown?
public async Task InsertAsync(BaseBigQueryTable table, IList<IDictionary<string, object>> rowList, GetBqInsertIdFunction getInsert, CancellationToken ct)
{
    if (rowList.Count == 0)
    {
        return;
    }

    string tableId = table.TableId;
    IList<TableDataInsertAllRequest.RowsData> requestRows = rowList
        .Select(row => new TableDataInsertAllRequest.RowsData { Json = row, InsertId = getInsert(row) })
        .ToList();
    TableDataInsertAllRequest request = new TableDataInsertAllRequest { Rows = requestRows };
    bool needCreateTable = false;
    BigqueryService bqService = null;
    try
    {
        bqService = GetBigQueryService();
        TableDataInsertAllResponse response = await bqService.Tabledata
            .InsertAll(request, _account.ProjectId, table.DataSetId, tableId)
            .ExecuteAsync(ct);
        IList<TableDataInsertAllResponse.InsertErrorsData> insertErrors = response.InsertErrors;
        if (insertErrors != null && insertErrors.Count > 0)
        {
            //handling errors, removed for easier reading..
        }
    }
    catch
    {
        //... removed for easier reading
    }
    finally
    {
        if (bqService != null)
            bqService.Dispose();
    }
}

private BigqueryService GetBigQueryService()
{
    return new BigqueryService(new BaseClientService.Initializer
    {
        HttpClientInitializer = _credential,
        ApplicationName = _applicationName,
    });
}
** Follow up **
The answer given below seems to be the only way to reduce HTTP connections. However, I found that using batch requests on a large amount of live streaming data can hit limitations; see my other question on this: Google API BatchRequest: An established connection was aborted by the software in your host machine
The link below documents how to batch API calls together to reduce the number of HTTP connections your client has to make:
https://cloud.google.com/bigquery/batch
After the batch request is issued, you can get the response and parse out all the involved job ids. Alternatively, you can preset job ids in the batch request for each inner request; note that you need to make sure those job ids are unique.
After that you can check what is going on with each of these jobs via jobs.get: https://cloud.google.com/bigquery/docs/reference/v2/jobs/get
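For illustration, here is a rough sketch of queuing several insertAll calls into one batch with the .NET client (assuming the bqService and request objects from the question; projectId, dataSetId, and tableId are placeholders; BatchRequest lives in Google.Apis.Requests):

using Google.Apis.Requests;

var batch = new BatchRequest(bqService);

// Queue multiple insertAll requests; they are sent over a single HTTP connection.
batch.Queue<TableDataInsertAllResponse>(
    bqService.Tabledata.InsertAll(request, projectId, dataSetId, tableId),
    (content, error, index, message) =>
    {
        // Per-request callback: 'error' is non-null if this inner request failed.
    });

await batch.ExecuteAsync(ct);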