I'm trying to get touch positions in a MonoGame game that I'm running on a Windows Phone 8 device and debugging through Visual Studio 2012.
This is my update method:
public override void Update(Microsoft.Xna.Framework.GameTime time)
{
    if (_StateIsActive) // for pausing game
    {
        base.Update(time);
        TouchCollection touchCollection = TouchPanel.GetState();
        foreach (TouchLocation tl in touchCollection)
        {
            if ((tl.State == TouchLocationState.Pressed)
                || (tl.State == TouchLocationState.Moved))
            {
                Debug.WriteLine(tl.Position.ToString());
            }
        }
    }
}
When I touch the screen all I get in the output is:
{X:INF Y:INF}
What am I doing wrong?
EDIT: I tried to do what has been done in this thread, but it won't work, as I still only get the INF values: MonoGame reading touch gestures
The answer seems to be that there is a bug related to this in this version of MonoGame (3.0.1.0), and that it will be fixed in a new release within the next few days.
Source: the #MonoGame channel on IRC and their forums.
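Until that release ships, one stopgap to consider (my own workaround, not something from the MonoGame team) is to simply skip the bogus readings inside the loop:

// Sketch: ignore touch locations that report an infinite position.
// tl is the TouchLocation from the foreach loop above.
if ((tl.State == TouchLocationState.Pressed
    || tl.State == TouchLocationState.Moved)
    && !float.IsInfinity(tl.Position.X)
    && !float.IsInfinity(tl.Position.Y))
{
    Debug.WriteLine(tl.Position.ToString());
}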
Related
I'm coding a gamification application whose interactions with the end user are performed through the microphone and webcam. I'm using a 2019 Mac mini with Unity 2018.
During coding sessions in the Unity editor everything works fine, but when I build for deployment, the application becomes very slow and freezes when it tries to access the webcam and microphone.
I gave it all the permissions for using these devices.
The built application works well on Windows and Android.
Why can't I use the Microphone with the deployed app on macOS? The same code works fine on the other platforms.
Below is the piece of code in which I use the microphone:
public void onPointerDown()
{
    // "" selects the default microphone, same as passing null.
    mSpeechToTextManagerGoogle.audioSource.clip = Microphone.Start("", true, 30, 44100);
    mSpeechToTextManagerGoogle.start = Time.time;
    mSpeechToTextManagerGoogle.audioSource.loop = true;
    // Busy-wait until the microphone starts delivering samples.
    while (!(Microphone.GetPosition(null) > 0))
    {
    }
}

public void onPointerUp()
{
    Microphone.End("");
    mSpeechToTextManagerGoogle.end = Time.time;
    StartCoroutine(mSpeechToTextManagerGoogle.SendRequestToGoogle(mSpeechToTextManagerGoogle.audioSource.clip, 0, (mSpeechToTextManagerGoogle.end - mSpeechToTextManagerGoogle.start)));
}
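One thing worth noting: the empty while loop blocks Unity's main thread, and on macOS, where the microphone permission prompt is handled asynchronously by the OS, that can look exactly like a freeze. A minimal non-blocking sketch, reusing the question's mSpeechToTextManagerGoogle fields:

// Sketch: non-blocking version of onPointerDown as a coroutine.
// Requires "using System.Collections;"; call it with StartCoroutine(StartRecording()).
private IEnumerator StartRecording()
{
    mSpeechToTextManagerGoogle.audioSource.clip = Microphone.Start(null, true, 30, 44100);
    mSpeechToTextManagerGoogle.start = Time.time;
    mSpeechToTextManagerGoogle.audioSource.loop = true;
    // Yield one frame at a time until the microphone delivers samples.
    while (Microphone.GetPosition(null) <= 0)
        yield return null;
}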
Try with this:
mSpeechToTextManagerGoogle.audioSource.clip = Microphone.Start("Built-in Microphone", true, 30, 44100);
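Since the exact device name differs between machines, a safer variant (a sketch of mine, not part of this answer) is to enumerate Unity's Microphone.devices and pick one from the list:

// Sketch: log every capture device and fall back to the default one.
// Note: on macOS the list stays empty until microphone access is granted.
string deviceName = null; // null means "default microphone" for Microphone.Start
foreach (string device in Microphone.devices)
    Debug.Log("Capture device found: " + device);
if (Microphone.devices.Length > 0)
    deviceName = Microphone.devices[0];
mSpeechToTextManagerGoogle.audioSource.clip = Microphone.Start(deviceName, true, 30, 44100);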
I am developing an app in Unity in which the user can take photos using their device camera. This works great using Unity's WebCamTexture. However, there is no flash support for WebCamTexture, so I have written my own code to access the device torch. The code WORKS; however, it doesn't work while streaming the WebCamTexture (the camera is in use, so the Java service call returns an error). Does anyone have any suggestions for how to get around this issue? Is there any way to use Unity's WebCamTexture to activate the camera torch? Here is my code for activating the camera torch:
AndroidJavaClass cameraClass = new AndroidJavaClass("android.hardware.Camera");

// This is an ugly hack to make Unity
// generate Camera permissions
WebCamDevice[] devices = WebCamTexture.devices;

int camID = 0;
camera = cameraClass.CallStatic<AndroidJavaObject>("open", camID);

// I'm pretty sure camera will never be null at this point.
// It will either be a valid object or Camera.open would throw an exception.
if (camera != null)
{
    AndroidJavaObject cameraParameters = camera.Call<AndroidJavaObject>("getParameters");
    cameraParameters.Call("setFlashMode", "torch");
    camera.Call("setParameters", cameraParameters);
    Active = true;
}
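For completeness, the torch also needs a matching teardown so the camera is released for other users such as WebCamTexture; this is my own sketch using the same android.hardware.Camera calls, not code from the question:

// Sketch: turn the torch off and give the camera back to the system.
if (camera != null)
{
    AndroidJavaObject parameters = camera.Call<AndroidJavaObject>("getParameters");
    parameters.Call("setFlashMode", "off");
    camera.Call("setParameters", parameters);
    camera.Call("release"); // android.hardware.Camera.release()
    camera = null;
    Active = false;
}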
Try checking out Camera Capture Kit for Unity. It gives you the functionality you want for Android, along with the source code for it.
https://www.assetstore.unity3d.com/en/#!/content/56673
I've got a real device, a Nokia Lumia 730 (Windows Phone 8.1), for testing my cocos2d-x game, and I've discovered that multitouch doesn't work.
First of all, the sample code:
auto listener = EventListenerTouchAllAtOnce::create();
listener->onTouchesBegan = [&](const std::vector<Touch*>& touches, Event* event) {
    log("onTouchesBegan: %d", (int)touches.size());
};
listener->onTouchesMoved = [&](const std::vector<Touch*>& touches, Event* event) {
    log("onTouchesMoved: %d", (int)touches.size());
};
listener->onTouchesEnded = [&](const std::vector<Touch*>& touches, Event* event) {
    log("onTouchesEnded: %d", (int)touches.size());
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
No matter how many fingers I touch with, it always says 1.
I ran the same app on Android (Samsung Galaxy S3) and it works well.
As far as I found out, there's no need (and no way) to enable multitouch on Windows Phone; it's already enabled by default.
Second, I made a simple test inside my win8.1-xaml project (I didn't test win8.1-universal; I can't use it anyway because it lacks support for 3rd-party libs like AdMob):
Touch.FrameReported += Touch_FrameReported;

void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    if (Visibility == Visibility.Collapsed)
        return;
    TouchPointCollection pointCollection = e.GetTouchPoints(this);
    Debug.WriteLine("points touched: " + pointCollection.Count);
}
And this actually works and prints the correct values.
So running these two pieces of code at the same time produces, for example, this:
onTouchesBegan: 1 <-- cocos2d-x part
points touched: 2 <-- c# project part
Cocos thinks there's only one touch; however, WP knows there are more (2).
Help!
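One workaround to consider until the cocos2d-x listener behaves (untested, purely a sketch): since Touch.FrameReported demonstrably sees every finger, the full touch data could be collected on the C# side and forwarded to the game manually:

// Sketch: enumerate every active touch point each frame.
// TouchPoint.Action and TouchPoint.Position are standard Silverlight members.
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPointCollection points = e.GetTouchPoints(this);
    for (int i = 0; i < points.Count; i++)
    {
        TouchPoint p = points[i];
        Debug.WriteLine("touch " + i + ": " + p.Action + " at " + p.Position);
        // Forward p to the cocos2d-x side here.
    }
}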
For some reason I cannot get MonoGame to play any sounds through SoundEffect or SoundEffectInstance.
With a workaround I can get Songs to play, but still not SoundEffects or SoundEffectInstances.
If I run my example below through “native XNA”, everything works fine.
I’m using:
MonoGame 3.2 for Windows Desktop (DirectX)
Windows 7
Visual Studio Express 2013
Example:
SoundEffect effect;
SoundEffectInstance instance;
Song song;

protected override void LoadContent()
{
    // Load sounds; no errors, and the objects get filled with data.
    effect = Content.Load<SoundEffect>("myWavFileAsSoundEffect"); // Loaded with ContentProcessor = "Sound Effect - XNA Framework"
    song = Content.Load<Song>("myWavFileAsSong"); // Loaded with ContentProcessor = "Song - XNA Framework"
    instance = effect.CreateInstance();

    // Set volume to 100%, just in case
    SoundEffect.MasterVolume = 1.0f;
    MediaPlayer.Volume = 1.0f;
    instance.Volume = 1.0f;
}
protected override void Update(GameTime gameTime)
{
    if (Keyboard.GetState().IsKeyDown(Keys.Space))
    {
        // Play instance. Nothing happens.
        instance.Play();

        // Play effect. Nothing happens.
        bool success = effect.Play(1.0f, 0.0f, 0.0f);
        // success is true

        // Play song.
        try
        {
            // Error:
            // HRESULT: [0x80004002], Module: [General], ApiCode: [E_NOINTERFACE/No such interface supported]
            MediaPlayer.Play(song);
        }
        catch (Exception)
        {
            // Play the song again.
            // Plays fine.
            MediaPlayer.Play(song);
        }
    }
    base.Update(gameTime);
}
Does anyone know what might be wrong? Why can’t I play any SoundEffects or SoundEffectInstances?
I had the exact same problem, but on Windows 10. The solution for me was to reinstall DirectX.
Web installer: https://www.microsoft.com/en-us/download/details.aspx?id=35&84e4d527-1a2f-c70a-8906-a877ec4baada=1
Hope this helps.
I was having the same problem when I added .wav files. But when I added the .xnb files that are generated after building the XNA project (the files are in the bin\Content folder), the errors were gone. Could you please try with the .xnb files?
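To make the .xnb route concrete, here is my sketch of the usual setup (not from the answer itself): copy the compiled .xnb files into the project's Content folder, set them to "Copy to Output Directory", and load them by asset name, without the extension:

// Sketch: assumes "myWavFileAsSoundEffect.xnb" was copied to <output>\Content.
Content.RootDirectory = "Content"; // usually set in the Game constructor
SoundEffect effect = Content.Load<SoundEffect>("myWavFileAsSoundEffect"); // no .xnb extension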
I created a simple Xamarin MonoMac Mac application (bare bones, one window). I am trying to get a preview of my webcam onto the main window of the application. I use the AVFoundation classes to get access to the webcam.
I can successfully connect to the Mac's webcam, and the webcam light turns on, but I get no video on the main window.
My C# code:
session = new AVCaptureSession () { SessionPreset = AVCaptureSession.PresetMedium };
var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
var input = AVCaptureDeviceInput.FromDevice (captureDevice);
if (input == null) {
    Console.WriteLine ("No input - this won't work on the simulator, try a physical device");
}
session.AddInput (input);

AVCaptureVideoPreviewLayer captureVideoPreviewLayer = new AVCaptureVideoPreviewLayer (session);
// ----> preview layer onto Window here?!
session.StartRunning ();
This fixed it - hopefully it will help someone:

AVCaptureVideoPreviewLayer captureVideoPreviewLayer = AVCaptureVideoPreviewLayer.FromSession (session);
this.Window.ContentView.Layer = captureVideoPreviewLayer;
this.Window.ContentView.WantsLayer = true;
session.StartRunning ();
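A note on why the order matters (my understanding of AppKit, not stated in the answer): assigning Layer before setting WantsLayer = true makes the view layer-hosting, so AppKit adopts the preview layer instead of creating its own backing layer. To keep the preview filling the window, the layer can also be sized explicitly; a minimal sketch:

// Sketch: size the preview layer to the window's content area.
captureVideoPreviewLayer.Frame = this.Window.ContentView.Bounds;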