I am using the Microsoft.Office.Interop.Visio class library and writing C# to automate the creation of topology diagrams. This particular topology diagram needs three different network zones, and each server goes into the proper zone depending on its purpose. The network zones are shapes themselves, as are the servers.
My challenge is that when the program is fed a data file specifying the types of servers needed, I need to make sure the servers are put into the right zones and that each zone is resized to fit its shapes properly.
I am at the point where I can successfully add the server shapes to a new diagram and connect them to each other using the Shape.AutoConnect method. However, I am stuck on how to accomplish the zone placement and resizing described above.
Any thoughts/guidance would be appreciated. Thank you!
Adding shapes to a new diagram:
// Connector and tier masters from the stencil
Master visioConnectorShapeHttps = visioStencil.Masters.get_ItemU(@"HTTPS Line");
Master visioConnectorShapeSQL = visioStencil.Masters.get_ItemU(@"SQLConnection");

Master visioWebRoleMaster = visioStencil.Masters.get_ItemU(@"Azure PaaS WebRole Tier");
Shape visioWebRoleShape = visioPage.Drop(visioWebRoleMaster, 10, 16);

Master visioWorkerRoleMaster = visioStencil.Masters.get_ItemU(@"Azure PaaS WorkerRole Tier");
Shape visioWorkerRoleShape = visioPage.Drop(visioWorkerRoleMaster, 10, 10);

// Web role -> worker role over HTTPS
visioWebRoleShape.AutoConnect(visioWorkerRoleShape, VisAutoConnectDir.visAutoConnectDirDown, visioConnectorShapeHttps);

Master visioSQLIaaSMaster = visioStencil.Masters.get_ItemU(@"Azure IaaS Database Tier");
Shape visioSQLIaaSShape = visioPage.Drop(visioSQLIaaSMaster, 10, 3);

// Worker role -> database over SQL
visioWorkerRoleShape.AutoConnect(visioSQLIaaSShape, VisAutoConnectDir.visAutoConnectDirDown, visioConnectorShapeSQL);
To summarize: I want to feed the app a data file and automate the creation of a topology diagram in which the server shapes fit properly into their respective network zone shapes.
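For the zone placement and resizing, one approach (a sketch, assuming your zone shapes are, or can be turned into, Visio structured-diagram containers) is the ContainerProperties API, which handles both membership and automatic resizing. `visioZoneMaster` here is an assumed container-style master, not from the original code:

```csharp
// Drop a zone (container) shape, then add servers as members.
Shape zoneShape = visioPage.Drop(visioZoneMaster, 5, 5);

// visMemberAddExpandContainer expands the container to fit the new member.
zoneShape.ContainerProperties.AddMember(
    visioWebRoleShape,
    VisMemberAddOptions.visMemberAddExpandContainer);

// Alternatively, after adding all members, snap the zone to its contents.
zoneShape.ContainerProperties.FitToContents();
```

Shape.ContainerProperties returns null for shapes that are not containers, so this depends on the zone masters being container-enabled.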
Related
To recreate my issue, I've set up a project in Unity3d 2020 using this sample project.
Here I can successfully map an avatar to a single skeleton. In my project, however, I want to map multiple skeletons, not only the closest one, as the sample provides.
So far I've successfully rendered multiple skeletons, but when I try to map a separate avatar onto each of them, the two avatars follow the motion of only one of the skeletons in the scene.
The following code is used to setup the avatars:
var avatar = Instantiate(avatarPrefab, new Vector3(0, 0, 0), Quaternion.identity);
avatar.GetComponent<PuppetAvatar>().KinectDevice = this;
avatar.GetComponent<PuppetAvatar>().CharacterRootTransform = avatar.transform;
avatar.GetComponent<PuppetAvatar>().RootPosition = this.transform.GetChild(i).transform.GetChild(0).gameObject; // gets the pelvis of each "rootBody" prefab
The creator of the PuppetAvatar.cs script has yet to release any updates to support multibody tracking, but I posed a similar question in this thread.
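For reference, the pattern I would expect to need is roughly this (a sketch only; `BodyIndex` and `trackedBodyCount` are made-up names, since PuppetAvatar does not currently expose a per-body binding):

```csharp
// Hypothetical sketch - each avatar would be bound to one tracked body
// instead of every avatar reading the single closest body.
for (int i = 0; i < trackedBodyCount; i++)
{
    var avatar = Instantiate(avatarPrefab, Vector3.zero, Quaternion.identity);
    var puppet = avatar.GetComponent<PuppetAvatar>();
    puppet.KinectDevice = this;
    puppet.CharacterRootTransform = avatar.transform;
    puppet.RootPosition = this.transform.GetChild(i).GetChild(0).gameObject;
    // puppet.BodyIndex = i; // hypothetical field: avatar i would sample
    //                       // joints from tracked body i in the update loop
}
```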
Take a look at MoveBox: https://github.com/microsoft/MoveBox-for-Microsoft-Rocketbox
MoveBox can parse the SMPL body models extracted by an external tool for 3D multi-person human pose estimation from RGB videos.
It also supports the Azure Kinect DK.
How can I reset the scores of the game on certain days using Firebase in Unity?
I want the scores I marked in the picture to be reset every day, every week, and at the end of every month - in short, whenever their time comes. How can I do this?
What you want to look into is the Cloud Functions feature called scheduled functions.
If you're only familiar with Unity, you'll want to follow this getting started guide for more details. The basic gist of it is that you'll create a tiny snippet of JavaScript that runs at a fixed schedule and lets you perform some administrative tasks on your database.
I'll try to encapsulate the basic setup:
install Node
run npm install -g firebase-tools
create a directory where you want to work on functions - you probably want to do this outside of your Unity directory
run firebase login to log in to the Firebase CLI
run firebase init (or firebase init functions) and follow the steps in the wizard to create some functions code to test
when you're ready to use them in your game, you can use firebase deploy to send them off to the cloud.
From the Scheduled functions doc page, you can see this example of how to run a function every day:
exports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')
    .timeZone('America/New_York') // Users can choose timezone - default is America/Los_Angeles
    .onRun((context) => {
      console.log('This will be run every day at 11:05 AM Eastern!');
      return null;
    });
You can use these with the Node Admin SDK. Something like:
// Import Admin SDK
var admin = require("firebase-admin");

// Get a database reference to our scores
var db = admin.database();

exports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')
    .timeZone('America/New_York') // Users can choose timezone - default is America/Los_Angeles
    .onRun((context) => {
      // user_id is a placeholder here - see the note below
      db.ref(`users/${user_id}/`).update({
        userGameScore: 0,
        userMonthScore: 0,
        userScore: 0,
        userWeeklyScore: 0
      });
      return null;
    });
Of course, here I'm not iterating over user IDs, etc.
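That iteration could be factored into a small pure helper that builds one multi-location update, so the whole reset is a single `update` call. This is a sketch; the field names assume the score layout used above:

```javascript
// Build a multi-location update that zeroes every user's score fields.
// Field names are taken from the snippet above (an assumption about your data).
function buildResetUpdates(userIds) {
  const updates = {};
  for (const id of userIds) {
    updates[`users/${id}/userGameScore`] = 0;
    updates[`users/${id}/userMonthScore`] = 0;
    updates[`users/${id}/userScore`] = 0;
    updates[`users/${id}/userWeeklyScore`] = 0;
  }
  return updates;
}
```

Inside `onRun` you would read the IDs once (for example via `db.ref('users').once('value')`) and then call `db.ref().update(buildResetUpdates(ids))`.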
One final note: this is a very literal interpretation of, and answer to, your question. It may be easier (and save you some money if your game scales up) to write a score and a timestamp (maybe using ServerValue.Timestamp) together into your database and just have the scores appear zeroed out via client logic. I would personally try that approach first and abandon it only if it felt like it was getting too complex.
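A sketch of that client-side idea in C# (since the game is in Unity); the Monday 00:00 UTC week boundary is an assumption, not something your data implies:

```csharp
// Sketch: store the score plus its last-write time, and zero it in display
// logic once a new period starts, instead of rewriting the database.
static long DisplayedWeeklyScore(long storedScore, DateTime lastWriteUtc, DateTime nowUtc)
{
    // Week starts Monday 00:00 UTC (assumed convention).
    DateTime WeekStart(DateTime t) => t.Date.AddDays(-(((int)t.DayOfWeek + 6) % 7));
    return WeekStart(lastWriteUtc) == WeekStart(nowUtc) ? storedScore : 0;
}
```

The same comparison works for daily and monthly scores with the boundary function swapped out.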
I'm working on a small personal application that should read some text (2 sentences at most) from a really simple Android screenshot. The text is always the same size, same font, and in approx. the same location. The background is very plain, usually a few shades of 1 color (think like bright orange fading into a little darker orange). I'm trying to figure out what would be the best way (and most importantly, the fastest way) to do this.
My first attempt involved the IronOcr C# library, and to be fair, it worked quite well! But I've noticed a few issues with it:
It's not 100% accurate
Despite having a community/trial version, it sometimes throws exceptions telling you to get a license
It takes ~400ms to read a ~600x300 pixel image, which in the case of my simple image, I consider to be rather long
As strange as it sounds, I have a feeling that libraries like IronOcr and Tesseract may just be too advanced for my needs. To improve speed I have even written a piece of code to "threshold" my image first, making it completely black and white.
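A simple per-pixel version of that thresholding step could look like this (a sketch using System.Drawing; the cutoff of 128 is arbitrary, and GetPixel/SetPixel is kept for clarity rather than speed - LockBits would be much faster):

```csharp
// Convert an image to pure black and white using a fixed luminance cutoff.
static Bitmap Threshold(Bitmap source, int cutoff = 128)
{
    var result = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < source.Height; y++)
    {
        for (int x = 0; x < source.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            int luma = (c.R + c.G + c.B) / 3; // crude average-based luminance
            result.SetPixel(x, y, luma > cutoff ? Color.White : Color.Black);
        }
    }
    return result;
}
```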
My current IronOcr settings look like this:
ImageReader = new AdvancedOcr()
{
    CleanBackgroundNoise = false,
    EnhanceContrast = false,
    EnhanceResolution = false,
    Strategy = AdvancedOcr.OcrStrategy.Fast,
    ColorSpace = AdvancedOcr.OcrColorSpace.GrayScale,
    DetectWhiteTextOnDarkBackgrounds = true,
    InputImageType = AdvancedOcr.InputTypes.Snippet,
    RotateAndStraighten = false,
    ReadBarCodes = false,
    ColorDepth = 1
};
And I could totally live with the results I've been getting using IronOcr, but the licensing exceptions ruin it. I also don't have $399 USD to spend on a private hobby project that won't even leave my own PC :(
But my main goal with this question is to find a better, faster, or more efficient way to do this. It doesn't necessarily have to be an existing library; I'd be more than willing to make my own kind of letter-detection code that would work (only?) for screenshots like mine if someone can point me in the right direction.
I have researched this topic, and the best solution I could find is Azure Cognitive Services. You can use the Computer Vision API to read text from an image. Here is the complete documentation.
How fast does it have to be?
If you are using C#, I recommend the Google Cloud Vision API. You pay per request, but the first 1000 per month are free (check pricing here). It does require a web request, but I find it to be very quick.
using Google.Cloud.Vision.V1;
using System;

namespace GoogleCloudSamples
{
    public class QuickStart
    {
        public static void Main(string[] args)
        {
            // Instantiates a client
            var client = ImageAnnotatorClient.Create();

            // Load the image file into memory
            var image = Image.FromFile("wakeupcat.jpg");

            // Performs text detection on the image file
            var response = client.DetectText(image);
            foreach (var annotation in response)
            {
                if (annotation.Description != null)
                    Console.WriteLine(annotation.Description);
            }
        }
    }
}
I find it works well for photos and scanned documents, so it should work perfectly for your situation. The SDK is also available in other languages, like Java, Python, and Node.
I am having a lot of issues loading textures into my simple game. First off, I am able to load a texture when I'm inside Game1.cs. However, I am currently trying to create a level, so I want to load all the pictures in the Level class.
public Level(IServiceProvider _serviceProvider)
{
    content = new ContentManager(_serviceProvider, "Content");
    mNrOfTextures = 3;
    mTextures = new Texture2D[mNrOfTextures];
    mTextures[0] = Content.Load<Texture2D>("sky");
    // And then more textures and other stuff...
}
But the program can never find the file sky. I don't really get any useful error messages, and I'm moving away from tutorials currently. Can anyone point me in the right direction?
Full path to file: C:\c++\ProjIV\ProjIV\ProjIVContent\
I personally just pass my ContentManager to my level class, instead of passing the service provider as others do.
In this case, you need to use your local content instance, not the static Content
mTextures[0] = content.Load<Texture2D>("sky");
EDIT: I see this did not work, can you attach a picture of your solution layout with the content?
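Putting the suggestion together, the Level class could take the ContentManager directly instead of a service provider. This is a sketch reusing the question's field names where possible:

```csharp
public class Level
{
    private readonly ContentManager content;
    private Texture2D[] mTextures;

    // Pass the game's ContentManager in from Game1, e.g. new Level(Content).
    public Level(ContentManager contentManager)
    {
        content = contentManager;
        mTextures = new Texture2D[3];
        // Lowercase "content" - the constructor parameter, not a static member.
        mTextures[0] = content.Load<Texture2D>("sky");
    }
}
```

This also sidesteps the second ContentManager instance, so both Game1 and Level load from the same "Content" root.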
I am using DirectShowLib.net in C# and I am trying to change my ALLOCATOR_PROPERTIES settings, as I am using a live video source and changes are not "immediately" visible on screen.
When constructing a filter graph in GraphStudioNext, the ALLOCATOR_PROPERTIES are shown for both the upstream and the downstream pin, although only after connection.
I'd like to set the ALLOCATOR_PROPERTIES using IAMBufferNegotiation, but when trying to get the interface from my capture filter (AV/C Tape Recorder/Player) I get an E_UNEXPECTED (0x8000ffff) error. Here is the relevant C# code:
DS.IAMBufferNegotiation iamb = (DS.IAMBufferNegotiation)capturePin;
DS.AllocatorProperties allocatorProperties = new DS.AllocatorProperties();
hr = iamb.GetAllocatorProperties(allocatorProperties);
DS.DsError.ThrowExceptionForHR(hr);
When I use the downstream video decoder input pin instead, I get a System.InvalidCastException, as the interface is not supported.
How can I change the cBuffers value of ALLOCATOR_PROPERTIES?
Changing the number of buffers is not going to help you here. The number of buffers is negotiated between filters and is, basically, not to be changed externally. In your case, however, there is no real buffering in the pipeline: once a video frame is available on the first output pin, it immediately goes through and reaches the video renderer. If you see a delay there, it means that either the DV player has internal latency, or it is time-stamping frames "late" and the video renderer has to wait before presentation.
You can troubleshoot the latter case by inserting a Smart Tee filter in between and connecting its Preview output pin downstream to the video renderer - if this helps, then the issue is frame time stamping on the source. The number of buffers does not cause any presentation lag here.
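If you want to try the Smart Tee experiment from code, a rough DirectShowLib sketch could look like this. `graph` is assumed to be your existing IGraphBuilder and `rendererInputPin` the renderer's input pin; the CLSID is the stock Smart Tee's from uuids.h:

```csharp
// Create and insert the stock Smart Tee filter.
var smartTeeType = Type.GetTypeFromCLSID(new Guid("CC58E280-8AA1-11D1-B3F1-00AA003761C5"));
var smartTee = (DS.IBaseFilter)Activator.CreateInstance(smartTeeType);
hr = graph.AddFilter(smartTee, "Smart Tee");
DS.DsError.ThrowExceptionForHR(hr);

// Route: capture pin -> Smart Tee input, Smart Tee "Preview" -> renderer.
var teeIn = DS.DsFindPin.ByDirection(smartTee, DS.PinDirection.Input, 0);
var teePreview = DS.DsFindPin.ByName(smartTee, "Preview");
hr = graph.Connect(capturePin, teeIn);
DS.DsError.ThrowExceptionForHR(hr);
hr = graph.Connect(teePreview, rendererInputPin);
DS.DsError.ThrowExceptionForHR(hr);
```

The Preview pin delivers frames without reference-clock pacing, so if the lag disappears in this configuration, the source's time stamps are the culprit.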