How to fix the scaling/positioning problem in helix3d models? - c#

Problem:
I am trying to create and render detailed building models with HelixToolkit.Wpf.SharpDX. Some parts of the model (mostly detailed furnishing and piping structures inside the house) are mis-scaled and mis-positioned at first sight.
I am new here and don't have enough reputation to post images yet:
1.first sight
2.lamps?
However, if I carefully move my viewpoint (camera?) into the house, some of the inside structures start to show. After wandering around the entire inside space of the house and pulling my viewpoint back to the outside world, the rendered scene looks perfect.
3.lamps!
4.correct scene
(The calculation and assembly process for the 3D model should be correct.)
Questions:
Is there any important step I'm missing?
Are there possible workarounds to ensure the scene is rendered correctly?
Can I, and should I, "walk" the camera around programmatically to get the scene rendered correctly?
I hope I've described my problem clearly enough.
Any suggestion is helpful!
Model3DCollection collection; //pre-assembled data
ObservableElement3DCollection obCollection = new ObservableElement3DCollection();
foreach (var m3d in collection)
{
    if (m3d is GeometryModel3D gm3D)
    {
        MeshGeometry3D meshGeometry3D = gm3D.Geometry as MeshGeometry3D;
        HelixToolkit.Wpf.SharpDX.MeshGeometry3D meshGeometry = CastGeometry(meshGeometry3D);
        var model = new MeshGeometryModel3D()
        {
            Material = CastMaterial(gm3D.Material),
            Transform = m3d.Transform,
            Geometry = meshGeometry,
            CullMode = CullMode.Front
        };
        meshGeometry.UpdateOctree();
        obCollection.Add(model);
    }
}
OnPropertyChanged(nameof(obCollection));
//...

Are you using a separate thread instead of the UI thread to load your models?
It seems like the transforms are not getting updated.
Make sure the following is running on the UI thread:
var model = new MeshGeometryModel3D()
{
    Material = CastMaterial(gm3D.Material),
    Transform = m3d.Transform,
    Geometry = meshGeometry,
    CullMode = CullMode.Front
};
obCollection.Add(model);
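For example, if the loop runs on a background thread, a minimal sketch of marshalling the element creation back to the UI thread could look like this (assuming a standard WPF Application.Current is available; the variables are the ones from the loop in the question):
// Sketch: wrap the element creation in a Dispatcher call so it runs on the UI thread.
Application.Current.Dispatcher.Invoke(() =>
{
    var model = new MeshGeometryModel3D()
    {
        Material = CastMaterial(gm3D.Material),
        Transform = m3d.Transform,
        Geometry = meshGeometry,
        CullMode = CullMode.Front
    };
    obCollection.Add(model);
});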

Related

Azure Kinect DK, Body tracking: Mapping separate avatars on separate skeletons

To recreate my issue, I've set up a project in Unity3D 2020 using this sample project.
Here I can successfully map an avatar to a single skeleton. In my project, however, I want to map multiple skeletons, not only the closest one, which is what the sample provides.
So far I've successfully rendered multiple skeletons, but when I try to map separate avatars onto each of them, the two avatars follow the motion of only one of the skeletons in the scene.
The following code is used to setup the avatars:
var avatar = Instantiate(avatarPrefab, new Vector3(0, 0, 0), Quaternion.identity);
avatar.GetComponent<PuppetAvatar>().KinectDevice = this;
avatar.GetComponent<PuppetAvatar>().CharacterRootTransform = avatar.transform;
avatar.GetComponent<PuppetAvatar>().RootPosition = this.transform.GetChild(i).transform.GetChild(0).gameObject; // gets the pelvis of each "rootBody" prefab
The creator of the PuppetAvatar.cs script has yet to release any updates to support multibody tracking, but I posed a similar question in this thread.
Take a look at MoveBox: https://github.com/microsoft/MoveBox-for-Microsoft-Rocketbox
MoveBox can parse the SMPL body models extracted by an external tool for 3D multi-person human pose estimation from RGB videos.
It also supports the Azure Kinect DK.

Unity Timeline Component Not being Referenced through Scripting

I followed a tutorial to create a custom text track in Unity Timeline. I want to bind the track's key to a subtitle GameObject with a TextMeshProUGUI component through a script. Basically, I am using the method below:
playableDirector.SetGenericBinding(track, subtitle);
This is the outcome in the timeline:
This is the outcome in the timeline, if I just drag the same object:
Why is the component not appearing in the first picture?
Here is another test I did after manually referencing:
var subtitle = playableDirector.GetGenericBinding(track);
playableDirector.ClearGenericBinding(track);
playableDirector.SetGenericBinding(track, subtitle);
Debug.Log(subtitle.GetType());
I was not feeding the TextMeshProUGUI component into the method directly because I was getting an error inside the script editor. But after reading up, components are considered Objects, so there was no reason for it not to work. I ran the script and it worked even though there was still a red line. I don't know why this happened, but here is the solution:
var subtitle = Subtitle.subtitle; // I referenced the component and made it static
playableDirector = GetComponent<PlayableDirector>();
timeline = playableDirector.playableAsset as TimelineAsset;
good = timeline.GetRootTrack(1);
bad = timeline.GetRootTrack(2);
var goodTracks = good.GetChildTracks() as List<TrackAsset>;
var badTracks = bad.GetChildTracks() as List<TrackAsset>;
playableDirector.SetGenericBinding(goodTracks[0], subtitle);
playableDirector.SetGenericBinding(badTracks[0], subtitle);

Disable/enable ARKit during runtime in Unity3d - C#

I am working with Unity3D, using C# and the ARKit plugin (2.0 from GitHub).
In my current application I am using ARKit for measuring distances. The tool I am creating needs this functionality only for that reason, so I was wondering how I could enable ARKit when the user needs the ruler and disable it otherwise.
I want to avoid any performance loss while the user is using a non-ARKit tool. Am I right in saying that ARKit keeps working in the background once it has been initialized? I am new to ARKit, so I don't have a perfect overview of how to handle it.
Dropping some code lines here makes no sense; it's basically the plugin imported into the project plus my own script that depends on some of its functions. I didn't change anything in the plugin's source code. The measuring tool I programmed works pretty well, but I could not work out how to activate and deactivate ARKit itself.
Can someone help me out with this? Disabling the GameObjects while the scripts keep running seems like a "dirty" way to switch those functionalities off, and I need to do it cleanly (for example, the video feed in the background also needs to be disabled). I guess the ARKit functions are not paused or disabled just because some scripts are disabled; it seems the API still runs in the background, because it lags when I do so.
If you need more information, please let me know. Any help or suggestion would be very nice.
Thanks a lot!
The current ARKit API doesn't have a method to disable or enable it in Unity during run-time at this point.
With that being said, Unity has its own function to enable and disable VR, AR, or XR plugins. If the ARKit plugin is built correctly, this method should work. So you might be able to disable ARKit by setting XRSettings.enabled to false and enable it by setting it to true.
It's also a good idea to call XRSettings.LoadDeviceByName with an empty string and wait one frame before setting XRSettings.enabled to false when disabling it:
IEnumerator DisableAR()
{
    XRSettings.LoadDeviceByName("");
    yield return null;
    XRSettings.enabled = false;
}
Then call it to disable AR:
StartCoroutine(DisableAR());
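To switch AR back on later, the reverse pattern should work the same way (just a sketch, not tested; the deviceName parameter is a placeholder, so check XRSettings.supportedDevices for the name your ARKit build actually registers):
IEnumerator EnableAR(string deviceName)
{
    // deviceName: whatever XRSettings.supportedDevices reports for ARKit on the device
    XRSettings.LoadDeviceByName(deviceName);
    yield return null; // wait one frame for the device to load
    XRSettings.enabled = true;
}
Then call StartCoroutine(EnableAR(deviceName)); when the ruler is needed again.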
I guess I am answering a pretty old post. I found a way but I don't know if that is what you are expecting.
As Programmer said:
The current ARKit API doesn't have a method to disable or enable it in Unity during run-time at this point.
So the way I did it incorporates Programmer's code. In addition, if you need the camera to render a skybox or a solid color in non-AR mode, I did something like the following: the live video is supplied as textures to the camera's clear material, so I save the current textures before changing them and swap in black placeholder textures. When you want to re-enable AR, you set the textures back to the saved values and the video feed loads correctly again.
bool ARMode;
bool isSupported;
Camera cam;
UnityARCameraManager ARCameraManager;
private Texture2D _videoTextureY;
private Texture2D _videoTextureCbCr;

private void Awake()
{
    cam = Camera.main;
    isSupported = FindObjectOfType<UnityARCameraManager>().sessionConfiguration.IsSupported;
    ARMode = isSupported;
    ARCameraManager = FindObjectOfType<UnityARCameraManager>();
}

void DisableAR()
{
    XRSettings.enabled = false;
    ARCameraManager.enabled = false;
    // Save the current video textures so they can be restored later
    _videoTextureY = (Texture2D)cam.GetComponent<UnityARVideo>().m_ClearMaterial.GetTexture("_textureY");
    _videoTextureCbCr = (Texture2D)cam.GetComponent<UnityARVideo>().m_ClearMaterial.GetTexture("_textureCbCr");
    cam.GetComponent<UnityARVideo>().m_ClearMaterial.SetTexture("_textureY", Texture2D.blackTexture);
    cam.GetComponent<UnityARVideo>().m_ClearMaterial.SetTexture("_textureCbCr", Texture2D.blackTexture);
    cam.clearFlags = CameraClearFlags.SolidColor;
    cam.backgroundColor = Color.black;
    cam.GetComponent<UnityARVideo>().enabled = false;
}

void EnableAR()
{
    ARCameraManager.enabled = true;
    XRSettings.enabled = true;
    cam.clearFlags = CameraClearFlags.Depth;
    // Restore the saved video textures
    cam.GetComponent<UnityARVideo>().m_ClearMaterial.SetTexture("_textureY", _videoTextureY);
    cam.GetComponent<UnityARVideo>().m_ClearMaterial.SetTexture("_textureCbCr", _videoTextureCbCr);
    cam.GetComponent<UnityARVideo>().enabled = true;
}
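Tying these together, a small toggle based on the fields above might look like this (just a sketch; ARMode tracks which mode is currently active):
void ToggleAR()
{
    if (!isSupported)
        return; // ARKit is not available on this device

    if (ARMode)
        DisableAR();
    else
        EnableAR();

    ARMode = !ARMode;
}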

C# XNA Loading in textures

I am having a lot of issues loading textures into my simple game. First off, I am able to load a texture when I'm inside Game1.cs. However, I am currently trying to create a level, so I want to load all the pictures in the Level class.
public Level(IServiceProvider _serviceProvider)
{
    content = new ContentManager(_serviceProvider, "Content");
    mNrOfTextures = 3;
    mTextures = new Texture2D[mNrOfTextures];
    mTextures[0] = Content.Load<Texture2D>("sky");
    //And then more textures and other stuff..
}
But the program can never find the file sky. I don't really get any useful error messages, and I'm moving away from tutorials at this point. Can anyone point me in the right direction?
Full path to file: C:\c++\ProjIV\ProjIV\ProjIVContent\
I personally just pass my ContentManager to my level class, instead of passing the service provider as others do.
In this case, you need to use your local content instance, not the static Content
mTextures[0] = content.Load<Texture2D>("sky");
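For reference, passing the ContentManager in directly could look roughly like this (a sketch; the field names mirror the question's code):
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class Level
{
    private ContentManager content;
    private Texture2D[] mTextures;
    private int mNrOfTextures;

    // Pass in the Game's ContentManager (e.g. from Game1.LoadContent)
    public Level(ContentManager contentManager)
    {
        content = contentManager;
        mNrOfTextures = 3;
        mTextures = new Texture2D[mNrOfTextures];
        mTextures[0] = content.Load<Texture2D>("sky");
    }
}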
EDIT: I see this did not work. Can you attach a picture of your solution layout with the content?

Fellow Oak DICOM - changing image window level

I am not an experienced programmer; I just need to add a DICOM viewer to my VS2010 project. I can display the image in Windows Forms, but I can't figure out how to change the window center and width. Here is my code:
DicomImage image = new DicomImage(_filename);
int maxV = image.NumberOfFrames;
sbSlice.Maximum = maxV - 1;
image.WindowCenter = 7.0;
double wc = image.WindowCenter;
double ww = image.WindowWidth;
Image result = image.RenderImage(0);
DisplayImage(result);
It did not work. I don't know if this is the right approach.
The DicomImage class was not created with the intention of it being used to implement an image viewer. It was created to render preview images in the DICOM Dump utility and to test the image compression/decompression codecs. Maybe it was a mistake to include it in the library at all?
It is difficult for me to find fault in the code as being buggy when it is being used for something far beyond its intended functionality.
That said, I have taken some time to modify the code so that the WindowCenter/WindowWidth properties apply to the rendered image. You can find these modifications in the Git repo.
var img = new DicomImage(fileName);
img.WindowCenter = 2048.0;
img.WindowWidth = 4096.0;
DisplayImage(img.RenderImage(0));
I looked at the code and it looked extremely buggy. https://github.com/rcd/fo-dicom/blob/master/DICOM/Imaging/DicomImage.cs
In the current buggy implementation, setting the WindowCenter or WindowWidth properties has no effect unless Dataset.Get(DicomTag.PhotometricInterpretation) is either Monochrome1 or Monochrome2 during Load(). This is already ridiculous, but it still cannot be used, because the _renderOptions variable is only set in a single place and is immediately used for the _pipeline creation (not giving you a chance to change it via the WindowCenter property). Your only chance is the grayscale _renderOptions initialization: _renderOptions = GrayscaleRenderOptions.FromDataset(Dataset);.
The current solution: Your dataset should have
DicomTag.WindowCenter set appropriately
DicomTag.WindowWidth != 0.0
DicomTag.PhotometricInterpretation == Monochrome1 or Monochrome2
The following code accomplishes that:
DicomDataset dataset = DicomFile.Open(fileName).Dataset;
//dataset.Set(DicomTag.WindowWidth, 200.0); //the WindowWidth must be non-zero
dataset.Add(DicomTag.WindowCenter, "100.0");
//dataset.Add(DicomTag.PhotometricInterpretation, "MONOCHROME1"); //ValueRepresentations tag is broken
dataset.Add(new DicomCodeString(DicomTag.PhotometricInterpretation, "MONOCHROME1"));
DicomImage image = new DicomImage(dataset);
image.RenderImage();
The best solution: wait until this buggy library is fixed.
