I'm trying to create an Android plugin for Unity, and it's going fine as long as I don't need the current context. But if I try to open a simple alert box, the app crashes. Does anyone have any idea what I'm doing wrong? It doesn't seem like it should be that hard...
Code in Java:
public static void openAlert() {
    new AlertDialog.Builder(UnityPlayer.currentActivity)
            .setTitle("Test")
            .setMessage("This is an alert box!")
            .setNeutralButton("Ok", null)
            .show();
}
From Unity, I'm calling it as follows (C#):
using (AndroidJavaClass myUnityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer")) {
    using (AndroidJavaObject obj_Activity = myUnityPlayer.GetStatic<AndroidJavaObject>("currentActivity")) {
        AndroidJavaClass myActivity = new AndroidJavaClass("com.bundlename.appname.SampleClass");
        myActivity.CallStatic("openAlert");
    }
}
Since for some reason I don't get crash messages from the device, it's a blind flight. I am new to C# and Java development, so excuse me if this is a stupid question.
Best
Wolfgang
First, anything that modifies the user interface, such as messages, dialogs, labels, etc., should only be done from the main thread.
I don't know which thread the call you're making comes from, but if it's not the main thread, you may run into crashes like this.
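As a sketch of that fix from the Unity side (assuming the class names from the question), you can marshal the call onto the Android UI thread with runOnUiThread and an AndroidJavaRunnable; this is the usual pattern, though untested here:

using UnityEngine;

public static class AlertCaller
{
    public static void OpenAlertOnUiThread()
    {
        using (AndroidJavaClass unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
        using (AndroidJavaObject activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity"))
        {
            // Dialogs must be created on the Android main (UI) thread;
            // runOnUiThread marshals the static call there.
            activity.Call("runOnUiThread", new AndroidJavaRunnable(() =>
            {
                using (AndroidJavaClass sampleClass = new AndroidJavaClass("com.bundlename.appname.SampleClass"))
                {
                    sampleClass.CallStatic("openAlert");
                }
            }));
        }
    }
}

The same marshalling can instead be done inside the Java plugin by wrapping the dialog code in activity.runOnUiThread(...), which keeps the Unity side unchanged.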
This video series helped me build my Android plugins.
Here is the playlist; he goes through the whole gamut. I'll start you off on the first Android video.
http://www.youtube.com/watch?v=s1Mle2ERiuQ&list=PLf8PfKIJPGkjhMgylU87G5A0JLMSy_8ad
There are three Android videos and all the examples work; just watch them in high definition so you can read the code being typed.
Some days ago, I wanted to make an application that could recognize speech and turn it into text. It took me several hours to get System.Speech.Recognition running. I ran into issues that had been asked about a lot, with each answer saying something different; none of them worked for me. In the end I got it to work.
Today I started the program, and it worked fine. It could hear me and recognize the words I said. But about three hours later it completely stopped working. All I did in that time was unplug my headset once and plug it in again. I changed nothing in the code. I didn't even restart Visual Studio; it was still running from before. I have now also restarted the computer, without any success. I have absolutely no idea what happened. I do get a message that doesn't lead to an error (searching for this message did not help me in any way): "Information: 0 : SAPI does not implement phonetic alphabet selection."
I know this isn't much information; if you need anything I haven't mentioned, just ask. Can anyone help me solve this?
Here is the code:
using (recognizer = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("de-DE")))
{
    // Build a small command grammar from a fixed set of words.
    Choices services = new Choices(new string[] { "rennen", "laufen", "schleichen", "renn", "lauf", "schleich", "jetzt", "kiste", "Generator", "Stop", "Halt", "Warte", "rechts", "links", "Rückwärts", "hinten" });
    // Create a Grammar object from the Choices and load it into the recognizer.
    Grammar servicesGrammar = new Grammar(services);
    recognizer.LoadGrammarAsync(servicesGrammar);
    // Configure input to the speech recognizer.
    recognizer.SetInputToDefaultAudioDevice();
    recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
    recognizer.RecognizeAsync(RecognizeMode.Multiple);
    // Keep the console window open.
    while (true)
    {
        Thread.Sleep(5);
    }
}
public void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    // Nothing important here, because it never reaches this point.
}
For everyone who has the same problem: I'm sorry I can't tell you how it started working again. It suddenly worked again, without any changes... very strange behavior.
Is it possible to keep audio from a WebView, in particular an embedded WebTorrent client (which plays video), running in the background on Android? I've seen conflicting answers on this, and I'm curious what you guys know about the topic. I've seen some confirmed answers on ways to do it in Android Studio, but none for Xamarin.
I've been told that the WebView is considered a UI element, and that this makes it impossible to keep the video/audio running while in the background. So if that's the case, do you think that with some clever coding I could fool the Android OS into thinking that the WebView is still in the foreground?
I know that it's possible to keep the audio running using MediaPlayer if, for example, you're playing an MP3. So another possibility might be using a service to maintain audio focus; but then, would the video stop playing (seeing as that doesn't fix the WebView being a UI element)?
One other possibility would be porting the entire app into a service... but I'm not sure if that's possible. If I get an answer that it is, I'll do the work to make it happen.
I'm not looking for you guys to do the coding; I'm just looking for guidance on which method (if any) would be possible/plausible/most effective.
And here is some sample code I'm currently using to construct my WebView (not sure if that matters):
//what's on
[Activity]
//this class should be an aggregate subscription feed
public class WhatsOnActivity : Activity
{
    public override void OnBackPressed()
    {
        WebView whatsOnWebView = FindViewById<WebView>(Resource.Id.webViewWhatsOn);
        //only navigate back in the webview if there is history; otherwise let the activity handle it
        if (whatsOnWebView.CanGoBack())
            whatsOnWebView.GoBack();
        else
            base.OnBackPressed();
    }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        SetContentView(Resource.Layout.whatsOn);
        //declare webview and tell our code where to find the XML layout resource
        WebView whatsOnWebView = FindViewById<WebView>(Resource.Id.webViewWhatsOn);
        whatsOnWebView.SetBackgroundColor(Android.Graphics.Color.Green);
        //set the webview client
        whatsOnWebView.SetWebViewClient(new WebViewClient());
        //enable javascript before the first page load so it applies to it
        whatsOnWebView.Settings.JavaScriptEnabled = true;
        //zoom control on? This should perhaps be disabled for consistency?
        //we will leave it on for now
        whatsOnWebView.Settings.BuiltInZoomControls = true;
        whatsOnWebView.Settings.SetSupportZoom(true);
        //scrollbars disabled
        // subWebView.ScrollBarStyle = ScrollbarStyles.OutsideOverlay;
        whatsOnWebView.ScrollbarFadingEnabled = false;
        //load the 'whats on' url, will need webscript for curated subscribed feed
        whatsOnWebView.LoadUrl("https://www.bitchute.com/#listing-subscribed");
    }
}
EDIT: Also, my open-source project can be found here:
https://github.com/hexag0d/bitchute_mobile_android_a2
Thanks in advance. =]
I know it is quite late to answer this question, but the project I am working on is similar to this one.
I am currently handling it by launching the Activity that hosts the WebView from a foreground service.
This is the code that launches the WebView Activity from the Service; put it in the Service's onStartCommand():
Context context = getApplicationContext();
// Starting an Activity from a Service context requires FLAG_ACTIVITY_NEW_TASK.
Intent dialogIntent = new Intent(context, WebViewActivity.class);
dialogIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(dialogIntent);
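Since the question is about Xamarin, a rough C# equivalent might look like the sketch below. The service and activity names are placeholders, and promoting this to a true foreground service (StartForeground plus a notification) is omitted for brevity:

using Android.App;
using Android.Content;
using Android.OS;

[Service]
public class WebViewKeepAliveService : Service
{
    // Not a bound service, so no binder is returned.
    public override IBinder OnBind(Intent intent) => null;

    public override StartCommandResult OnStartCommand(Intent intent, StartCommandFlags flags, int startId)
    {
        // Same idea as the Java snippet above: starting an Activity
        // from a Service context requires the NewTask flag.
        Intent dialogIntent = new Intent(ApplicationContext, typeof(WhatsOnActivity));
        dialogIntent.AddFlags(ActivityFlags.NewTask);
        StartActivity(dialogIntent);
        return StartCommandResult.Sticky;
    }
}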
I'm working on a Xamarin application, which I will at first have working on iOS, but plan to later expand to Android and other mobile platforms.
As such, I'm trying to keep as much common code in PCLs as possible.
My question: what is the best practice - in Xamarin.iOS for now - to initialize any dependent PCL code?
For now I have it in the RootViewController, inside ViewDidLoad():
public override void ViewDidLoad()
{
    base.ViewDidLoad();
    _engine = new MyEngine();
    View = new MainView(_engine);
}
Is this the right spot? I had considered putting it in the constructor for the RootViewController, but there's a fair bit going on in the initialization code, which runs up against "don't put heavy-duty init code into a constructor".
Things that happen are:
Load app settings
If the app is run for the first time ever, load basic defaults
Initialise other PCL libraries, such as a TextToSpeech module, a state engine (hence the name of the class above), etc.
Prepare a data grid based on XML or JSON input
Alternatively, I thought it could possibly go into the AppDelegate, but that didn't sound right.
I'm still fairly new to mobile app development in general and Xamarin in particular, though I've done native C# code for Windows for years. I just want to make sure I follow best practices, but there doesn't seem to be a "thou shalt" in this case.
Edit: I've extracted the solution based on #wishmaster's suggestions.
For iOS, the AppDelegate is the best place for initialization code. The AppDelegate also provides multiple delegate methods that give you feedback on application lifecycle events, such as FinishedLaunching (Xamarin's counterpart to didFinishLaunchingWithOptions). If you have a lot of data to download, or long-running tasks that your app depends on, I would suggest you take a look at backgrounding for iOS.
A technique I have also used is for my first view controller on iOS (or activity on Android) to display a splash screen and a loading indicator while I run some code to refresh the cache.
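That splash-and-loading technique could look something like this in Xamarin.iOS (a minimal sketch; LoadingViewController and RefreshCacheAsync are illustrative names, not from the original answer):

using System.Threading.Tasks;
using UIKit;

public class LoadingViewController : UIViewController
{
    public override async void ViewDidAppear(bool animated)
    {
        base.ViewDidAppear(animated);

        // Show a spinner while the long-running init work happens.
        var spinner = new UIActivityIndicatorView(UIActivityIndicatorViewStyle.WhiteLarge)
        {
            Center = View.Center
        };
        View.AddSubview(spinner);
        spinner.StartAnimating();

        await RefreshCacheAsync();

        spinner.StopAnimating();
        // Hand off to the real UI once loading is done.
        UIApplication.SharedApplication.KeyWindow.RootViewController = new RootViewController();
    }

    // Hypothetical stand-in for the app's real cache-refresh/init work.
    private Task RefreshCacheAsync() => Task.Delay(2000);
}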
Using #wishmaster's pointers, this solution works like a charm:
In AppDelegate.cs
// in the class body, declare a typed field for any data you may make available elsewhere
private MyEngine _engine;
public MyEngine Engine => _engine;
public override bool FinishedLaunching(UIApplication app, NSDictionary options)
{
    /*
     * Do whatever init needs to happen here. If you need to make this
     * available elsewhere, ensure you have properties or accessors,
     * as above.
     */
    _engine = new MyEngine();
    return true;
}
Then, in RootViewController.cs, using a similar approach to these examples in Obj-C or Swift, you can access the information through a property pointing at the AppDelegate:
var myappdelegate = UIApplication.SharedApplication.Delegate as AppDelegate;
var engine = myappdelegate.Engine;
View = new MainView(engine);
This resulted in a snappier startup of the application, because the initialisation now happens during the splash screen and no longer between the splash screen and the appearance of the UI.
I am trying to make an application that sends keys to an external application, in this case aerofly FS. I have previously used the SendKeys.SendWait() method with success, but this time it doesn't quite work the way I want it to. I want to send a "G" keystroke to the application; testing it with Notepad, I do get G's, but in aerofly FS nothing is received at all. Pressing G on the keyboard does work, though.
This is my code handling the input data (from an Arduino) and sending the keystrokes:
private void handleData(string curData)
{
    if (curData == "1")
        SendKeys.SendWait("G");
}
I too have run into external applications where SendKeys didn't work for me.
As best I can tell, some applications, like applets inside a browser, expect to receive the key down, followed by a pause, followed by a key up, which I don't think can be done with SendKeys.
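For reference, that down/pause/up sequence can also be produced without a library, via a P/Invoke to the Win32 keybd_event function. This is only a sketch of the idea, not what this answer originally used, and games that read input through DirectInput may ignore synthetic events either way (0x47 is the virtual-key code for 'G'):

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class KeyPresser
{
    [DllImport("user32.dll")]
    private static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

    private const byte VK_G = 0x47;            // virtual-key code for 'G'
    private const uint KEYEVENTF_KEYUP = 0x0002;

    public static void PressG()
    {
        keybd_event(VK_G, 0, 0, UIntPtr.Zero);               // key down
        Thread.Sleep(50);                                     // pause
        keybd_event(VK_G, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);  // key up
    }
}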
I have been using a C# wrapper for the AutoIt library and have found it quite easy to use.
Here's a link to a quick guide I wrote for integrating AutoIt into a C# project.
Once you have the wrapper and references, you can send "G" with the following:
private void pressG()
{
    AutoItX3Declarations.AU3_Send("{g}");
}
or with a pause,
private void pressG()
{
    AutoItX3Declarations.AU3_Send("{g down}", 0);
    AutoItX3Declarations.AU3_Sleep(50); // wait 50 milliseconds
    AutoItX3Declarations.AU3_Send("{g up}", 0);
}
AutoIt also allows you to programmatically control the mouse.
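If the wrapper declares the mouse functions that the native AutoItX3 DLL exports (an assumption here, mirroring the AU3_Send declarations above), a click sketch might look like:

private void clickAt(int x, int y)
{
    // Assumed wrapper declarations for AutoItX3's mouse exports.
    AutoItX3Declarations.AU3_MouseMove(x, y, 10);              // move at speed 10
    AutoItX3Declarations.AU3_MouseClick("LEFT", x, y, 1, 10);  // single left-click
}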
I have a web application that, under some conditions, pops up JavaScript alert()s that I need to react to in a WatiN test. Google pointed me at "Handling alerts in WATIN" from way back in 2007, which seemed promising, and I adapted the example code in that post into the following (anonymized):
private void MyAssert(IE browser, WatinHelper helper)
{
    AlertDialogHandler alertDialogHandler = new AlertDialogHandler();
    using (new UseDialogOnce(browser.DialogWatcher, alertDialogHandler))
    {
        // DoWrong() causes a JavaScript alert(); false means use nowait.
        DoWrong(helper, false);
        alertDialogHandler.WaitUntilExists(10 /*seconds*/);
        if (!alertDialogHandler.Exists())
        {
            Assert.Fail("No JavaScript alert when it should have been there");
        }
        alertDialogHandler.OKButton.Click();
    }
    SecondAssert(browser);
}
However, while the alert is displayed virtually instantaneously when DoWrong() is called (as it is supposed to be), the call to alertDialogHandler.WaitUntilExists() eventually fails with a WatiNException: "Dialog not available within 10 seconds". The only problem with that is that I could see the dialog most definitely was up on the screen.
I'm probably missing something simple; can someone point me in the right direction please?
I have also tried the following two variants, and some variations of them, with no luck; I keep getting the same error.
AlertDialogHandler alertDialogHandler = new AlertDialogHandler();
DoWrong(helper, false);
System.Diagnostics.Stopwatch stopwatch = new System.Diagnostics.Stopwatch();
stopwatch.Start();
do
{
}
while (!alertDialogHandler.Exists() && stopwatch.Elapsed.TotalMilliseconds < 3000);
Assert.IsTrue(alertDialogHandler.Exists(), "No JavaScript alert when it should have been there");
alertDialogHandler.OKButton.Click();
SecondAssert(browser);
and
AlertDialogHandler alertDialogHandler = new AlertDialogHandler();
browser.DialogWatcher.Add(alertDialogHandler);
DoWrong(helper, false);
alertDialogHandler.WaitUntilExists();
alertDialogHandler.OKButton.Click();
browser.WaitForComplete();
Assert.IsFalse(alertDialogHandler.Exists());
SecondAssert(browser);
Yes, I know that code is getting a bit ugly, but right now I'm mostly trying to get it to work at all. If it sits for a few seconds cooking the CPU at 100% utilization because of the tight loop in my second attempt, but does what I need it to (plain and simple: dismiss that alert()), that's OK.
This is an issue with WatiN and the way IE8 changed how it creates popups. The issue is fixed in the current code, available from the project's SourceForge SVN repository. Get it, compile it, and your problem is solved.
A new release of WatiN will be available before the end of this year.
HTH,
Jeroen