I'm running Unity 2020.1.17f1
Complete noob question.
I'm using source code bought from the Unity Asset Store for a project. To integrate Firebase, I've followed the Google guide here: https://firebase.google.com/docs/unity/setup#confirm-google-play-version, but I'm unsure about Step 5, where it says to put this code at the 'start of the application'.
I think I recall reading elsewhere that this step may not be needed in the 2020 version of Unity, but no events are being recorded in the Firebase dashboard.
So, any help in understanding how to implement Step 5?
Found the solution in the video below.
https://www.youtube.com/watch?v=SJRZOAA63aA
For step 5 I used this code which appeared in the video:
using System.Collections;
using UnityEngine;
using UnityEngine.Events;

public class FirebaseInitializer : MonoBehaviour
{
    public UnityEvent onFirebaseInitialized;

    private void Awake()
    {
        StartCoroutine(CheckAndFixDependenciesCoroutine());
    }

    private IEnumerator CheckAndFixDependenciesCoroutine()
    {
        var checkDependenciesTask = Firebase.FirebaseApp.CheckAndFixDependenciesAsync();
        yield return new WaitUntil(() => checkDependenciesTask.IsCompleted);

        var dependencyStatus = checkDependenciesTask.Result;
        if (dependencyStatus == Firebase.DependencyStatus.Available)
        {
            Debug.Log($"Firebase: {dependencyStatus} :)");
            onFirebaseInitialized.Invoke();
        }
        else
        {
            Debug.LogError(System.String.Format("Could not resolve all Firebase dependencies: {0}", dependencyStatus));
            // Firebase Unity SDK is not safe to use here.
        }
    }
}
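To confirm that events actually reach the dashboard, something like the following can be wired to the onFirebaseInitialized event in the Inspector. This is only a minimal sketch (the class name and the "app_started" event name are illustrative, not part of the guide) that logs a test Analytics event once initialization succeeds:
using UnityEngine;
using Firebase.Analytics;

public class AnalyticsStarter : MonoBehaviour
{
    // Hook this method up to FirebaseInitializer's onFirebaseInitialized
    // UnityEvent in the Inspector.
    public void OnFirebaseReady()
    {
        // Log a simple custom event so something shows up in the dashboard.
        // (Events can take a while to appear; DebugView shows them sooner.)
        FirebaseAnalytics.LogEvent("app_started");
        Debug.Log("Firebase ready, test event logged.");
    }
}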
I'm trying to use a cloud TTS within my Unity game.
In the newer versions (I am using 2019.1), WWW has been deprecated in favour of UnityWebRequest in the API.
I have tried the Unity documentation, but that didn't work for me.
I have also tried other threads, but they use WWW, which is deprecated in my Unity version.
void Start()
{
    StartCoroutine(PlayTTS());
}

IEnumerator PlayTTS()
{
    using (UnityWebRequest wr = UnityWebRequestMultimedia.GetAudioClip(
        "https://example.com/tts?text=Sample%20Text&voice=Male",
        AudioType.OGGVORBIS))
    {
        yield return wr.Send();

        if (wr.isNetworkError)
        {
            Debug.LogWarning(wr.error);
        }
        else
        {
            //AudioClip ttsClip = DownloadHandlerAudioClip.GetContent(wr);
        }
    }
}
The URL in a browser (I used Firefox) successfully loaded the audio clip, allowing me to play it.
What I want is for the TTS to play when something happens in the game; it's only in Start() for testing purposes.
Where am I going wrong?
Thanks in advance
Josh
UnityWebRequestMultimedia.GetAudioClip automatically adds a default DownloadHandlerAudioClip, which has a streamAudio property.
Set this to true and check UnityWebRequest.downloadedBytes in order to delay playback until enough data has arrived.
Something like:
public AudioSource source;

IEnumerator PlayTTS()
{
    using (UnityWebRequest wr = UnityWebRequestMultimedia.GetAudioClip(
        "https://example.com/tts?text=Sample%20Text&voice=Male",
        AudioType.OGGVORBIS))
    {
        // Enable streaming on the download handler that GetAudioClip added.
        ((DownloadHandlerAudioClip)wr.downloadHandler).streamAudio = true;

        // Start the request but don't wait for it to finish completely.
        wr.SendWebRequest();

        // Wait until enough bytes have arrived to start playback.
        // someThreshold is a placeholder: pick a byte count that gives enough of a buffer.
        while (!wr.isNetworkError && wr.downloadedBytes <= someThreshold)
        {
            yield return null;
        }

        if (wr.isNetworkError)
        {
            Debug.LogWarning(wr.error);
        }
        else
        {
            // Here I'm not sure if you would use
            source.PlayOneShot(DownloadHandlerAudioClip.GetContent(wr));
            // or rather
            //source.PlayOneShot(((DownloadHandlerAudioClip)wr.downloadHandler).audioClip);
        }
    }
}
Typed on smartphone so no warranty but I hope you get the idea
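If you don't actually need streaming, an (equally untested) simpler variant is to just wait for the whole download and then play the clip:
public AudioSource source;

IEnumerator PlayTTS()
{
    using (UnityWebRequest wr = UnityWebRequestMultimedia.GetAudioClip(
        "https://example.com/tts?text=Sample%20Text&voice=Male",
        AudioType.OGGVORBIS))
    {
        // Wait until the whole response has been downloaded.
        yield return wr.SendWebRequest();

        if (wr.isNetworkError || wr.isHttpError)
        {
            Debug.LogWarning(wr.error);
        }
        else
        {
            // The download handler has decoded the full clip by now.
            AudioClip ttsClip = DownloadHandlerAudioClip.GetContent(wr);
            source.PlayOneShot(ttsClip);
        }
    }
}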
Update
I've checked with different versions of Unity: it works with Unity 2018.2.6f1 Personal, which is installed on another laptop, but Unity 2018.2.12f1 Personal gives the error. Is it a Unity error?
I am using the basic free plan of Vuforia and working with cloud recognition. The cloud recognition part works fine, and the trackable handler prints the cloud-recognized image name too. But when I try to enable tracking for the tracked image target, it only works for the very first image. After the first one, it gives the following error:
TargetSearchResult cloud-image-name could not be enabled for tracking.
UnityEngine.Debug:LogError(Object)
Vuforia.TargetFinder:EnableTracking(TargetSearchResult, GameObject)
CloudRec:OnNewSearchResult(TargetSearchResult) (at Assets/Scripts/CloudRec.cs:66)
Vuforia.ObjectRecoBehaviour:Update()
The above error points to the following lines as the issue:
m_ObjectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
ImageTargetBehaviour imageTargetBehaviour = (ImageTargetBehaviour)m_ObjectTracker.TargetFinder.EnableTracking(targetSearchResult, ImageTargetTemplate.gameObject);
Tech versions:
Vuforia 7.5.20
Unity 2018.2.12f1 Personal
Full code is here:
public class CloudRec : MonoBehaviour, ICloudRecoEventHandler
{
    private CloudRecoBehaviour mCloudRecoBehaviour;
    private bool mIsScanning = false;
    private string mTargetMetadata = "";

    public ImageTargetBehaviour ImageTargetTemplate;

    ObjectTracker m_ObjectTracker;
    TargetFinder m_TargetFinder;

    // Use this for initialization
    void Start()
    {
        // register this event handler at the cloud reco behaviour
        mCloudRecoBehaviour = GetComponent<CloudRecoBehaviour>();
        if (mCloudRecoBehaviour)
        {
            mCloudRecoBehaviour.RegisterEventHandler(this);
        }
    }

    public void OnInitialized()
    {
        m_ObjectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        Debug.Log("Cloud Reco initialized");
    }

    public void OnInitError(TargetFinder.InitState initError)
    {
        Debug.Log("Cloud Reco init error " + initError.ToString());
    }

    public void OnUpdateError(TargetFinder.UpdateState updateError)
    {
        Debug.Log("Cloud Reco update error " + updateError.ToString());
    }

    public void OnStateChanged(bool scanning)
    {
        mIsScanning = scanning;
        if (scanning)
        {
            // clear all known trackables
            ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
            tracker.TargetFinder.ClearTrackables(false);
        }
    }

    // Here we handle a cloud target recognition event
    public void OnNewSearchResult(TargetFinder.TargetSearchResult targetSearchResult)
    {
        GameObject newImageTarget = Instantiate(ImageTargetTemplate.gameObject) as GameObject;
        GameObject augmentation = null;
        if (augmentation != null)
            augmentation.transform.SetParent(newImageTarget.transform);

        if (ImageTargetTemplate)
        {
            m_ObjectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
            ImageTargetBehaviour imageTargetBehaviour = (ImageTargetBehaviour)m_ObjectTracker.TargetFinder.EnableTracking(targetSearchResult, ImageTargetTemplate.gameObject);
            //ImageTracker imageTracker = TrackerManager.Instance.GetTracker<ImageTracker>();
            //ImageTargetBehaviour imageTargetBehaviour = (ImageTargetBehaviour)imageTracker.TargetFinder.EnableTracking(targetSearchResult, newImageTarget);
        }

        if (mIsScanning)
        {
            mCloudRecoBehaviour.CloudRecoEnabled = true;
        }
    }

    // Update is called once per frame
    void Update()
    {
    }

    public void OnInitialized(TargetFinder targetFinder)
    {
        m_ObjectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        m_TargetFinder = targetFinder;
    }
}
After almost a week of searching, I found the cause of the error. The error occurs when running in the Unity editor, but when I build to Android or iOS it works fine. So I stopped suspecting the code and started thinking outside the box. I tested various versions of Unity and Vuforia on the same machine, which didn't help. Eventually I tested on other machines and found the cause: hardware compatibility.
In my case, I am using a Mac Pro (mid-2009), which does not support object tracking, but the same code with the same versions works fine on a MacBook Air (2017) and a Mac Pro (mid-2014). So I conclude this is a hardware compatibility issue.
I'm trying to use the speech recognition functionality in Unity, but when I try to bring it in, Visual Studio isn't recognizing it.
Here's my code:
using UnityEngine;
using System;
using System.Text;
using System.Collections;
using System.Collections.Generic;
using UnityEngine.Windows.Speech;
using System.Linq;
public class VoiceRecog : MonoBehaviour {

    private KeywordRecognizer m_Recognizer;
    public KeywordRecognizer keywordRecognizer;
    protected Dictionary<string, System.Action> keywords = new Dictionary<string, System.Action>();

    void Start() {
        Debug.Log("In the Start() of VoiceRecog");
        keywords.Add("go", () =>
        {
            GoCalled();
        });
        keywordRecognizer = new KeywordRecognizer(keywords.Keys.ToArray());
        keywordRecognizer.OnPhraseRecognized += KeywordRecognizerOnPhraseRecognized;
    }

    void KeywordRecognizerOnPhraseRecognized(PhraseRecognizedEventArgs args) {
        Debug.Log("in 2nd function");
        System.Action keywordAction;
        if (keywords.TryGetValue(args.text, out keywordAction)) {
            keywordAction.Invoke();
        }
    }

    void GoCalled() {
        Debug.Log("You just Said Go.");
    }
}
Unity isn't accepting the KeywordRecognizer type, I think because it's not bringing in UnityEngine.Windows.Speech.
Any ideas why the UnityEngine.Windows.Speech namespace isn't being found?
To use KeywordRecognizer, you must include UnityEngine.Windows.Speech at the top. You did this, but KeywordRecognizer is still not recognized.
The likely problem is that you are using an older version of Unity. You must have Unity 5.4.0 or above in order to use KeywordRecognizer.
Unity 5.4.0 release note:
Windows: Added speech recognition APIs under UnityEngine.Windows.Speech. These APIs are supported on all ...
Also, it is now very easy to find out in which version of Unity an API was added; keep this as a reference for the next time you run into this problem.
Simply search for the API in the scripting documentation, then keep lowering the version number in the documentation URL until you reach your current Unity version. The number changes in steps of 10, like 550, 540, 530, and so on.
Available: (screenshot of the scripting documentation for a version where the API exists)
Not available: (screenshot for a version where it does not)
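Once you are on 5.4.0 or above, a minimal sketch like this should compile and work (the class name and keywords are just examples). Note that the recognizer also has to be started explicitly with Start(), which your script does not do yet:
using UnityEngine;
using UnityEngine.Windows.Speech;

public class MinimalKeywordTest : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new string[] { "go", "stop" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start(); // without this call no phrases are ever recognized
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Heard: " + args.text);
    }

    void OnDestroy()
    {
        // Stop and release the recognizer when the object goes away.
        if (recognizer != null)
        {
            if (recognizer.IsRunning)
            {
                recognizer.Stop();
            }
            recognizer.Dispose();
        }
    }
}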
Can anyone please tell me how to use the FBNativeAdBridgeCallback function? I basically want to know when the images have finished loading so that I can display them; otherwise I have a native banner floating around with empty values until Facebook has finished loading them, and it looks really ugly when each image/text pops in one at a time.
It's just the basic native ads sample script. I tried using IResult like one does with the login, but that makes it red (unresolved). There is no documentation for this anywhere, not even on the Facebook developer site; I can't find the API at all.
Can anyone please explain how to use it in the script provided below?
using UnityEngine;
using UnityEngine.UI;
using System;
using System.Collections;
using System.Collections.Generic;
using AudienceNetwork;

[RequireComponent(typeof(CanvasRenderer))]
[RequireComponent(typeof(RectTransform))]
public class NativeAdTest : MonoBehaviour
{
    private NativeAd nativeAd;

    // UI elements in scene
    [Header("Text:")]
    public Text title;
    public Text socialContext;
    [Header("Images:")]
    public Image coverImage;
    public Image iconImage;
    [Header("Buttons:")]
    public Text callToAction;
    public Button callToActionButton;
    public GameObject hide;

    void Awake()
    {
        // Create a native ad request with a unique placement ID (generate your own in the Facebook app settings).
        // Use a different ID for each ad placement in your app.
        NativeAd nativeAd = new AudienceNetwork.NativeAd("your placement id");
        this.nativeAd = nativeAd;

        // Wire up GameObject with the native ad; the specified buttons will be clickable.
        nativeAd.RegisterGameObjectForImpression(gameObject, new Button[] { callToActionButton });

        // Set delegates to get notified on changes or when the user interacts with the ad.
        nativeAd.NativeAdDidLoad = (delegate() {
            Debug.Log("Native ad loaded.");
            Debug.Log("Loading images...");
            // Use helper methods to load images from native ad URLs
            StartCoroutine(nativeAd.LoadIconImage(nativeAd.IconImageURL));
            StartCoroutine(nativeAd.LoadCoverImage(nativeAd.CoverImageURL));
            Debug.Log("Images loaded.");
            title.text = nativeAd.Title;
            socialContext.text = nativeAd.SocialContext;
            callToAction.text = nativeAd.CallToAction;
            Debug.Log("Native ad Luke.");
            // hide.SetActive(false);
            //FBNativeAdBridgeCallback
        });
        nativeAd.NativeAdDidFailWithError = (delegate(string error) {
            Debug.Log("Native ad failed to load with error: " + error);
        });
        nativeAd.NativeAdWillLogImpression = (delegate() {
            Debug.Log("Native ad logged impression.");
        });
        nativeAd.NativeAdDidClick = (delegate() {
            Debug.Log("Native ad clicked.");
        });

        // Initiate a request to load an ad.
        nativeAd.LoadAd();
        //nativeAd.nat
    }

    void OnGUI()
    {
        // Update GUI from native ad
        coverImage.sprite = nativeAd.CoverImage;
        iconImage.sprite = nativeAd.IconImage;
    }

    void OnDestroy()
    {
        // Dispose of native ad when the scene is destroyed
        if (this.nativeAd != null) {
            this.nativeAd.Dispose();
        }
        Debug.Log("NativeAdTest was destroyed!");
    }

    // void FBNativeAdBridgeCallback(IResult result)
    // {
    //
    // }
}
Thanks in advance!
AdView does not allow you to define a customized listener via FBNativeAdBridgeCallback(). AdView already provides five callbacks, including NativeAdDidLoad().
The solution to your question is to hide the ad first, then, after you receive NativeAdDidLoad() and have finished loading the images, bring the ad to the front and make it visible. There is no need to create a new FBNativeAdBridgeCallback().
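A rough, untested sketch of that approach, replacing Awake() in your NativeAdTest class and reusing the hide GameObject and the image-loading coroutines already in your script (assuming hide is the root object of the ad UI):
void Awake()
{
    // Keep the ad UI hidden until everything has loaded.
    hide.SetActive(false);

    NativeAd nativeAd = new AudienceNetwork.NativeAd("your placement id");
    this.nativeAd = nativeAd;
    nativeAd.RegisterGameObjectForImpression(gameObject, new Button[] { callToActionButton });

    nativeAd.NativeAdDidLoad = (delegate() {
        title.text = nativeAd.Title;
        socialContext.text = nativeAd.SocialContext;
        callToAction.text = nativeAd.CallToAction;
        // Load both images, then reveal the ad.
        StartCoroutine(LoadImagesThenShow());
    });

    nativeAd.LoadAd();
}

IEnumerator LoadImagesThenShow()
{
    // Wait for both image coroutines to finish before showing anything.
    yield return StartCoroutine(nativeAd.LoadIconImage(nativeAd.IconImageURL));
    yield return StartCoroutine(nativeAd.LoadCoverImage(nativeAd.CoverImageURL));
    hide.SetActive(true); // all texts and images are ready, show the banner
}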
I was wondering if there is any SWF workflow C# sample code available for the AWS .NET SDK?
AWS Forum Post: https://forums.aws.amazon.com/thread.jspa?threadID=122216&tstart=0
As part of getting familiar with SWF, I ended up writing a common case library that I hope others can use as well. It's called SimpleWorkflowFramework.NET and is available as open source at https://github.com/sdebnath/SimpleWorkflowFramework.NET. It definitely could use a lot of help, so if you are interested, jump right in! :)
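If you just want to see what the raw SDK calls look like before reaching for a framework, here is a minimal, untested sketch that starts a workflow execution with the AWS SDK for .NET. The domain, workflow type and task list names are placeholders you would have registered beforehand, and on .NET Core you would use the Async variants of the client methods:
using System;
using Amazon;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;

class StartWorkflowSample
{
    static void Main()
    {
        // Credentials are picked up from the usual SDK credential chain.
        var swf = new AmazonSimpleWorkflowClient(RegionEndpoint.USEast1);

        var request = new StartWorkflowExecutionRequest
        {
            Domain = "demo-domain",                    // registered via RegisterDomain or the console
            WorkflowId = Guid.NewGuid().ToString(),    // must be unique among open executions
            WorkflowType = new WorkflowType { Name = "TranscodeWorkflow", Version = "1.0" },
            TaskList = new TaskList { Name = "default" },
            Input = "{\"file\":\"movie.mov\"}",
            ExecutionStartToCloseTimeout = "3600",     // timeouts are passed as strings, in seconds
            TaskStartToCloseTimeout = "60",
            ChildPolicy = ChildPolicy.TERMINATE
        };

        var response = swf.StartWorkflowExecution(request);
        Console.WriteLine("Started run id: " + response.Run.RunId);
    }
}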
I have developed an open-source .NET library, Guflow, to program Amazon SWF. Here is how you can write a workflow to transcode a video:
[WorkflowDescription("1.0")]
public class TranscodeWorkflow : Workflow
{
    public TranscodeWorkflow()
    {
        // DownloadActivity is the startup activity and will be scheduled when the workflow is started.
        ScheduleActivity<DownloadActivity>().OnFailure(Reschedule);

        // After DownloadActivity is completed, TranscodeActivity will be scheduled.
        ScheduleActivity<TranscodeActivity>().AfterActivity<DownloadActivity>()
            .WithInput(a => new { InputFile = ParentResult(a).DownloadedFile, Format = "MP4" });

        ScheduleActivity<UploadToS3Activity>().AfterActivity<TranscodeActivity>()
            .WithInput(a => new { InputFile = ParentResult(a).TranscodedFile });

        ScheduleActivity<SendConfirmationActivity>().AfterActivity<UploadToS3Activity>();
    }

    private static dynamic ParentResult(IActivityItem a) => a.ParentActivity().Result();
}
In the above example I have left out task routing for clarity.
Here is how you can create an activity:
[ActivityDescription("1.0")]
public class DownloadActivity : Activity
{
    // It supports both sync and async methods.
    [ActivityMethod]
    public async Task<Response> Execute(string input)
    {
        // Simulate downloading the file.
        await Task.Delay(10);
        return new Response() { DownloadedFile = "downloaded path" };
    }

    public class Response
    {
        public string DownloadedFile;
    }
}
For clarity I'm leaving out examples of the other activities. Guflow is supported by documentation, a tutorial, and samples.