I need some help with the batchUpdate function of the Google Sheets API. Currently my project does some processing and calls my CreateEntry function to add a message to a specific cell on the sheet using the append function, as shown below. I realised that every call counts against my quota of API requests, so Google eventually stops serving them and my program halts with it.
As a workaround, I thought I would store my data in a 2D array and update everything in one shot at the end using batchUpdate, to avoid exceeding the per-user requests-per-100-seconds limit set by the Google API.
My questions are:
Is my logic correct in using batchUpdate to solve the request-limit problem (keeping in mind I might later be adding thousands of data points across 6 columns)?
If yes, how can I use batchUpdate to insert various data points into a range of cells?
PS: I have already tried putting the thread to sleep for 20 seconds after every 20 or so inserts, but that does not solve the problem reliably: sometimes I can get up to 400 cells of data written and sometimes it stops at just 70.
The code I am currently using is:
public void CreateEntry(string col, int ctw, string msg)
{
    var range = $"{sheet}!{col}{ctw}";
    var valueRange = new ValueRange();
    var oblist = new List<object> { msg };
    valueRange.Values = new List<IList<object>> { oblist };
    var appendRequest = service.Spreadsheets.Values.Append(valueRange, SpreadsheetId, range);
    appendRequest.ValueInputOption = SpreadsheetsResource.ValuesResource.AppendRequest.ValueInputOptionEnum.USERENTERED;
    var appendResponse = appendRequest.Execute();
}
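For what it's worth, accumulating the rows locally and writing them with a single `Values.BatchUpdate` call is a reasonable approach, since one batch call counts as one request regardless of how many cells it writes. A minimal sketch, assuming the same `service`, `sheet`, and `SpreadsheetId` members as in `CreateEntry` above (the method name `FlushEntries` and its parameters are made up for illustration):

```csharp
// Writes all buffered rows in one API request.
// 'rows' is the 2D array built up during processing;
// 'startCell' (e.g. "A2") is where the block of data should begin.
public void FlushEntries(IList<IList<object>> rows, string startCell)
{
    var valueRange = new ValueRange
    {
        Range = $"{sheet}!{startCell}",
        Values = rows
    };

    var body = new BatchUpdateValuesRequest
    {
        ValueInputOption = "USER_ENTERED",
        Data = new List<ValueRange> { valueRange }
    };

    // One Execute() here replaces thousands of individual Append calls.
    var response = service.Spreadsheets.Values.BatchUpdate(body, SpreadsheetId).Execute();
}
```

With a single-cell anchor such as `A2`, the values are written starting at that cell, so a 1000-row, 6-column `rows` collection lands in one request rather than 6000.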
A simple sample of the gsheet data points, for reference:
As of right now, as you can see from my code above, I am writing each cell individually. I now want to add everything in one shot to avoid exceeding the user-requests-per-100-seconds limit set by Google.
Once you install InfluxDB 2, it serves a website that includes sample code for various languages. Having created a bucket and a token with RW permissions and selected them, snippets of code with the appropriate magic strings are available. Putting them together, I have this:
using System;
using System.Threading.Tasks;
using InfluxDB.Client;
using InfluxDB.Client.Api.Domain;
using InfluxDB.Client.Writes;

namespace gen
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // init
            const string token = "uaKktnduBm_ranBVaG3y8vU-AAN ... w==";
            const string bucket = "SystemMonitor";
            const string org = "pdconsec";
            var client = InfluxDBClientFactory.Create("http://10.1.1.182:8086", token.ToCharArray());

            // write using a data point (doesn't require a model class)
            var point = PointData
                .Measurement("mem")
                .Tag("host", "host1")
                .Field("used_percent", 23.43234543)
                .Timestamp(DateTime.UtcNow, WritePrecision.Ns);

            using (var writeApi = client.GetWriteApi())
            {
                writeApi.WritePoint(bucket, org, point);
            }

            // Flux query
            var query = $"from(bucket: \"{bucket}\") |> range(start: -1h)";
            var tables = await client.GetQueryApi().QueryAsync(query, org);
        }
    }
}
The snippets demonstrate three different ways to write the same datum. All three execute without incident, but no data appears in the bucket, so I have simplified the code here to just one write method. It runs without incident, yet nothing appears in the bucket. Stepping through execution reveals that the Flux query executes and returns an empty list of tables.
Do I need to create something inside a bucket or somehow assign it a structure corresponding to the shape of the data point?
Is there some kind of save, flush or commit that I have omitted?
That query looks to me like it means "everything from the named bucket that was logged in the last hour", is that right?
An error message appears on the debug console. You have to scroll back to see it; it is buried in the usual avalanche of assembly-load information emitted when the application loads.
The batch item wasn't processed successfully because: InfluxDB.Client.Core.Exceptions.ForbiddenException: insufficient permissions for write
at InfluxDB.Client.WriteApi.<>c__DisplayClass9_2.<.ctor>b__21(RetryAttempt attempt)
at System.Reactive.Linq.ObservableImpl.SelectMany`2.ObservableSelector._.OnNext(TSource value) in /_/Rx.NET/Source/src/System.Reactive/Linq/Observable/SelectMany.cs:line 869
So why doesn't my token have write permission? I thought I specified RW. Revisiting token creation, it appears one must click the write permission to highlight it in order to assign it to the token being created.
In this example, the token would be created with write-only permission for the SystemMonitor bucket, because that bucket is highlighted only in the Write column.
The way this UI works isn't really clear until you have more than one bucket; then it becomes more obvious that the buckets highlighted in the Write (or Read) column are the ones for which the token will have write (or read) permission.
I'm having a very hard time with what I feel should be a simple task. Every week, our team queries VMware vCenter for three pieces of output: VM counts in three different locations. Here is what it looks like:
Name Value
---- -----
locationA 1433
locationB 278
locationC 23
The information is emailed to our team, as well as some of the higher-ups who like to see the data. This is all automated with a PowerShell script and Windows Task Scheduler running on a server, no problems.
That data is also placed in a Google Sheet. We just append a new row with the date and copy-paste the data into the three existing columns. It takes 30 seconds, once a week. It seems silly to automate given how little time the manual copy takes, but I really want to automate that last step using the Google Sheets API.
I seem to keep finding and pursuing what feel like online wild goose chases through Google's documentation on scripting access to and editing of Google Sheets. I've downloaded and installed the Sheets API libraries, the Drive API libraries, and the Google .NET library, set up the Google developer site, and run through the Google Sheets API documentation and OAuth authentication. I'm using Visual Studio 2013 because I figured it would play best with PowerShell and calling the .NET commands.
I have pretty much no coding experience outside of Powershell (if you can call that coding). I can't even figure out how to pull the Google sheet, much less do anything to it. Nothing I've tried is working so far, and for what little time it takes to copy this info manually every week I've already spent so much more time than is probably worth it. I feel like if I can get a handle on this, that would open the door for further Google automation in the future since we operate with a Google domain. At any rate, help is very much appreciated.
Here is my latest scripting attempt in Visual Studio:
using System;
using Google.GData.Client;
using Google.GData.Spreadsheets;

namespace MySpreadsheetIntegration
{
    class Program
    {
        static void Main(string[] args)
        {
            string CLIENT_ID = "abunchofcharacters.apps.googleusercontent.com";
            string CLIENT_SECRET = "secretnumber";
            string REDIRECT_URI = "https://code.google.com/apis/console";
            string SCOPE = "https://spreadsheets.google.com/feeds";

            OAuth2Parameters parameters = new OAuth2Parameters();
            parameters.ClientId = CLIENT_ID;
            parameters.ClientSecret = CLIENT_SECRET;
            parameters.RedirectUri = REDIRECT_URI;
            parameters.Scope = SCOPE;

            string authorizationUrl = OAuthUtil.CreateOAuth2AuthorizationUrl(parameters);
            Console.WriteLine(authorizationUrl);
            Console.WriteLine("Please visit the URL above to authorize your OAuth "
                + "request token. Once that is complete, type in your access code to "
                + "continue...");
            parameters.AccessCode = Console.ReadLine();

            OAuthUtil.GetAccessToken(parameters);
            string accessToken = parameters.AccessToken;
            Console.WriteLine("OAuth Access Token: " + accessToken);

            GOAuth2RequestFactory requestFactory =
                new GOAuth2RequestFactory(null, "MySpreadsheetIntegration-v1", parameters);
            SpreadsheetsService service = new SpreadsheetsService("MySpreadsheetIntegration-v1");
            service.RequestFactory = requestFactory;

            // NOTE: 'auth' and 'ApplicationName' are never defined in this attempt,
            // which is part of where I'm stuck.
            var driveService = new DriveService(auth);
            var file = new File();
            file.Title = "VSI - VM Totals by Service TEST";
            file.Description = string.Format("Created via {0} at {1}", ApplicationName, DateTime.Now.ToString());
            file.MimeType = "application/vnd.google-apps.spreadsheet";

            var request = driveService.Files.Insert(file);
            var result = request.Fetch();
            var spreadsheetLink = "https://docs.google.com/spreadsheets/d/GoogleDoc_ID";
            Console.WriteLine("Created at " + spreadsheetLink);
        }
    }
}
For anyone still following this, I found a solution. I was going about this entirely the wrong way (or at least a way I couldn't comprehend). The solution to my issue was to create a Google Apps Script that accesses my email once a week (after we get the report), teases out everything but the data I'm looking for, and sends it to the Google spreadsheet.
Here's the script:
function SendtoSheet() {
  var threads = GmailApp.search("from:THESENDER in:anywhere subject:THESUBJECTOFTHEEMAILWHICHNEVERCHANGES")[0];
  var message = threads.getMessages().pop();
  var bodytext = message.getBody();
  var counts = [];
  bodytext = bodytext.split('<br>');
  for (var i = 0; i < bodytext.length; i++) {
    var line = bodytext[i].split(':');
    if (line.length > 0) {
      if (!isNaN(line[1])) {
        counts.push(line[1]);
      }
    }
  }
  var now = new Date();
  counts.unshift(Utilities.formatDate(now, 'EST', 'MM/dd/yyyy'));
  var sheet = SpreadsheetApp.openById("GoogleDocID");
  sheet = sheet.setActiveSheet(sheet.getSheetByName("Data by Week"));
  sheet.appendRow(counts);
}
That `counts` array contains the magic to extract the numeric data, splitting on line breaks and colons. Works perfectly. It didn't involve figuring out how to use Visual Studio, or the .NET Google libraries, or editing the running PowerShell script. Clean and easy.
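Since the extraction step is plain JavaScript, it can be sanity-checked outside of Apps Script. Here is a minimal sketch of the same split-and-filter logic (the function name `extractCounts` and the sample body text are made up for illustration):

```javascript
// Mirrors the parsing in SendtoSheet: split the message body on <br>,
// split each line on ':', and keep the numeric right-hand sides.
function extractCounts(bodyHtml, dateString) {
  var counts = [];
  var lines = bodyHtml.split('<br>');
  for (var i = 0; i < lines.length; i++) {
    var parts = lines[i].split(':');
    // isNaN(undefined) is true, so lines without a numeric value are skipped
    if (parts.length > 0 && !isNaN(parts[1])) {
      counts.push(parts[1]);
    }
  }
  counts.unshift(dateString); // date goes in the first column
  return counts;
}

extractCounts("locationA: 1433<br>locationB: 278<br>locationC: 23", "12/19/2014");
// → ["12/19/2014", " 1433", " 278", " 23"]
```

The Apps Script version does the same thing with `message.getBody()` as input and hands the resulting array straight to `appendRow`.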
Hope this helps someone.
I have been working on an application that sends DOPU (drop-off/pick-up) requests for CCD documents via HealthVault. Creating the DOPU requests and getting the corresponding token generated by HealthVault work fine.
There are two SDK methods I am using to get Meaningful Use report data right now:
OfflineWebApplicationConnection.GetMeaningfulUseTimelyAccessDOPUDocumentReport gets me all the DOPU requests sent. This works fine; it always gives me the correct DOPU requests (with date/time stamp, token, and application ID).
The other is the OfflineWebApplicationConnection.GetMeaningfulUseVDTReport method. This is the one causing problems. No matter what date range I set (a week, a month, DateTime.MinValue to DateTime.MaxValue), I always get no results, no matter how many times I go into my HV account to view and download my connected DOPU documents. That SDK method still gives me no results.
I have also tried using CCD extension XML when sending a CCD to specifically set the patient-id and entry-date. Again, this doesn't affect my report results.
Does anyone with more experience with the Meaningful Use methods in the SDK have any suggestions on why I get nothing at all, ever, from the OfflineWebApplicationConnection.GetMeaningfulUseVDTReport call?
Here is some sample code that I am using to run the reports (some of the commented lines are just me trying different date ranges). I can also post snippets of code showing how I am sending the DOPU requests, even though that all seems to be behaving as expected.
class Program
{
    static void Main(string[] args)
    {
        var applicationId = ConfigurationManager.AppSettings["ApplicationId"];
        var url = ConfigurationManager.AppSettings["HealthServiceUrl"];
        var connection = new OfflineWebApplicationConnection(new Guid(applicationId), url, Guid.Empty /* offlinePersonId */);

        Console.WriteLine("\nGetMeaningfulUseTimelyAccessDOPUDocumentReport");
        //var receipts = connection.GetMeaningfulUseTimelyAccessDOPUDocumentReport(new DateRange(new DateTime(2014, 11, 19), new DateTime(2014, 12, 19)));
        var receipts = connection.GetMeaningfulUseTimelyAccessDOPUDocumentReport(new DateRange(DateTime.MinValue, DateTime.MaxValue));
        //var receipts = connection.GetMeaningfulUseTimelyAccessDOPUDocumentReport(new DateRange(DateTime.UtcNow.AddMonths(-12), DateTime.UtcNow));
        foreach (var receipt in receipts)
        {
            Console.WriteLine(string.Format("{0} - {1} - {2}", receipt.AvailableDate, receipt.PackageId, receipt.Source));
        }

        Console.WriteLine("\nGetMeaningfulUseVDTReport");
        //var activities = connection.GetMeaningfulUseVDTReport(new DateRange(new DateTime(2000, 12, 3), new DateTime(2014, 12, 10)));
        //var activities = connection.GetMeaningfulUseVDTReport(new DateRange(DateTime.MinValue, DateTime.MaxValue));
        var activities = connection.GetMeaningfulUseVDTReport(new DateRange(DateTime.UtcNow.AddMonths(-12), DateTime.UtcNow.AddDays(1)));
        foreach (var activity in activities)
        {
            Console.WriteLine(activity.PatientId);
        }

        Console.ReadLine();
    }
}
Update 1
Tried the sample Meaningful Use web application that MS had on CodePlex, using it with our application ID/credentials. Well, it worked. Not sure what is different, at least so far.
Update 2
So I have tried many other real CCDs (in our PPE environment, deleting them immediately when done), including test CCDs. I even set up the ConnectPackage in my app to behave the same as the test application from MS. No matter what I send, I get no Meaningful Use VDT data for the CCDs. The test CCD in the MS test application, however, works.
Update 3
Tried sending CCDs through the MS test application. Again, it sends and I can connect to an HV account with no problem. I get no VDT data, no matter the date range used. Maybe there is an issue with our CCDs?
I have about 50 background images for my site. What I am looking to do is randomly present the user with a different one for each visit. By this I mean they will surf through the site with the same background image during their visit.
After they close the browser and re-visit, or come back later, they are presented with a new random background image. I don't need to record what their previous background image was; just pick a random new one for each new visit to the site.
I'm not sure if this can be done with C#, JavaScript, jQuery or CSS.
EDIT: I am using ASP.NET 4.0 / C# for my web app. Thanks.
Don't use cookies as stated in the comments. This will only add extra bandwidth to the header messages sent to the server.
Instead, use local storage in the browser to save the index of the last image they used. When a new session is started, increment this value and display the next image.
I've used jStorage on projects and it works fine.
You can save the currently shown image in the browser's storage, along with a session ID. Later, you can check whether the session ID has changed; if so, change to a different image.
var image = $.jStorage.get("image", 0);
var session_id = $.jStorage.get("session", "put current session id here");
if (session_id != "current session id")
{
    // advance to the next image, wrapping back to 0 after the last one
    image = (image + 1) % 50;
    $.jStorage.set("image", image);
    $.jStorage.set("session", "current session id");
}
// use image to set background
EDIT:
Don't place this JavaScript in each web page. Instead, place it in an ASP.NET page that responds with a JavaScript content type and load it via the page's header. That way, page caching in the browser won't affect the script when the session changes.
Keep it in the Session. Pick it at random when it's not already in the session; it will stay the same as long as they're at your site, and next time they come back, they'll get a new one.
For example (my C# is a little rusty):
public string GetBackground(HttpSessionState session)
{
    string bg = (string) session["session.randomBG"];
    if (bg == null)
    {
        // pick a random BG & store it.
        bg = "pick one";
        session["session.randomBG"] = bg;
    }
    return bg;
}
Hope this helps!
var list = [
    "/images01.png",
    "/images02.png",
    ...
];
/*background url*/ = list[Math.floor(Math.random() * list.length)];
Sure it is possible. I will use pseudo-code here to show how it could be done; no doubt concrete examples will appear soon.
In the beginning of each page:
StartSession()
If ! SessionVariable[myBackground] then
    x = Randomize 50
    SessionVariable[myBackground] = "image0" + x + ".jpg"
endif

<style>
body {background-image:url(SessionVariable[myBackground]);}
</style>
Make sure you use the style tag where appropriate. The SessionVariable[myBackground] is user-created. In PHP it would look like this:
$_SESSION['myBackground']
Best wishes,
Try this function:
/**
* Change background image hourly.
* Name your images with 0.jpg, 1.jpg, ..., 49.jpg.
*/
function getBackground2() {
var expires = 3600000,
numOfImages = 50,
seed = Math.round(Date.now() / expires % numOfImages);
return '/path/to/background/' + seed + '.jpg';
}
I want to create multiple User Stories using the .NET REST API, but I only know how to create them one by one.
I'm also getting very slow responses, averaging 2 seconds for each creation.
Can anybody help me?
Code that I'm using:
foreach (RCStoryRecord RecStory in ssStoryList)
{
    StoryObject = new DynamicJsonObject();
    RecStoryResult = new RCStoryResultRecord();
    // StoryObject is created empty here; its fields (e.g. Name) need to be
    // populated from RecStory before the Create call for the story to have content.
    CreateResult creationResult = restApi.Create("HierarchicalRequirement", StoryObject);
}