Android's performHapticFeedback vs Vibrator - documentation and use - C#

TL;DR:
Would appreciate any extra information on Android's
abstract class Vibrator
vs
performHapticFeedback
Preferably, I'd avoid the Vibrator class and prioritise performHapticFeedback, so that I don't have to ask the user for permissions and can rely only on their system's preference.
Scenario:
I'm working with Xamarin trying to implement Haptic Feedback for Android and iOS.
Now, the iOS documentation has a short explanation, which I've implemented as follows:
void Platform.Vibrate(HapticsIntensity HapticsIntensity)
{
    UIKit.UIImpactFeedbackGenerator ImpactFeedbackGenerator;
    switch (HapticsIntensity)
    {
        case HapticsIntensity.Light:
            ImpactFeedbackGenerator = new UIKit.UIImpactFeedbackGenerator(UIKit.UIImpactFeedbackStyle.Light);
            break;
        case HapticsIntensity.Medium:
            ImpactFeedbackGenerator = new UIKit.UIImpactFeedbackGenerator(UIKit.UIImpactFeedbackStyle.Medium);
            break;
        case HapticsIntensity.Heavy:
            ImpactFeedbackGenerator = new UIKit.UIImpactFeedbackGenerator(UIKit.UIImpactFeedbackStyle.Heavy);
            break;
        default:
            ImpactFeedbackGenerator = null;
            break;
    }

    if (ImpactFeedbackGenerator != null)
    {
        ImpactFeedbackGenerator.Prepare();
        ImpactFeedbackGenerator.ImpactOccurred();
    }
}
The Android documentation for Haptic Feedback states that the method performHapticFeedback expects a HapticFeedbackConstant as a parameter.
public boolean performHapticFeedback (int feedbackConstant)
The available feedbackConstants are listed here, but there seems to be no difference between them.
Calling:
LongPress
Engine.AndroidActivity.Window.DecorView.PerformHapticFeedback(Android.Views.FeedbackConstants.LongPress);
has the same effect as
VirtualKey
Engine.AndroidActivity.Window.DecorView.PerformHapticFeedback(Android.Views.FeedbackConstants.VirtualKey);
or
KeyboardTap
Engine.AndroidActivity.Window.DecorView.PerformHapticFeedback(Android.Views.FeedbackConstants.KeyboardTap);
Moreover, some of the FeedbackConstants don't even result in haptic feedback.
Does anyone know where I could find any more documentation around this matter?
The reason I ask is that I am implementing an abstraction layer over Xamarin, where my intention is to have method calls like:
Vibrate(HapticsIntensity.Light);
Vibrate(HapticsIntensity.Medium);
Vibrate(HapticsIntensity.Heavy);
This works today, but while on iOS I get distinct Light, Medium, and Heavy tactile feedback, on Android I can't differentiate between them.
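For reference, my Android counterpart currently looks roughly like this (a minimal sketch; the mapping of Light/Medium/Heavy onto FeedbackConstants is just my guess at what should feel lighter or heavier, and Engine.AndroidActivity is my own abstraction):

void Platform.Vibrate(HapticsIntensity HapticsIntensity)
{
    // Map the cross-platform intensities onto HapticFeedbackConstants.
    // On most devices these constants currently feel identical, which is
    // exactly the problem described above.
    Android.Views.FeedbackConstants feedbackConstant;
    switch (HapticsIntensity)
    {
        case HapticsIntensity.Light:
            feedbackConstant = Android.Views.FeedbackConstants.KeyboardTap;
            break;
        case HapticsIntensity.Medium:
            feedbackConstant = Android.Views.FeedbackConstants.VirtualKey;
            break;
        case HapticsIntensity.Heavy:
            feedbackConstant = Android.Views.FeedbackConstants.LongPress;
            break;
        default:
            return;
    }

    // DecorView is used because it is always attached once the activity is visible.
    Engine.AndroidActivity.Window.DecorView.PerformHapticFeedback(feedbackConstant);
}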
Now, I know Android has a Vibrator class (see here), which allows for granular control; however, to use it I need to add a specific permission (android.permission.VIBRATE) to my manifest or request it from the user, and that is not optimal.
Also, if I add the android.permission.VIBRATE permission to my manifest, it seems that (if the device has Haptic Feedback enabled in its settings) I don't even need to add the Vibrate() method call to my buttons' onClick; they will already provide the tactile feedback (BZZZTT!!1!).

It totally depends upon whether the device OEM has altered AOSP code and the vibration timing arrays in the com.android.internal.R.array resources to enable a special haptic feedback "engine" on their device.
By default, hardware OEMs are only required to support (in hardware) standard on/off vibration (linear actuator, weighted rotary, etc.), not "true" haptic feedback, which is normally based upon waveforms.
In comparison, the newer iOS devices (7/8+?) use the "Taptic Engine" (fancy speak for an "advanced linear actuator") for haptic feedback, and only recently have Android devices started catching up on the hardware side (the new OnePlus, Pixel 3s, etc. are starting to include more advanced haptic/vibration hardware; whether or not the OEM has done anything special with that new hardware, you decide...)
So if you look at AOSP's PhoneWindowManager.java, you will find that most of the HapticFeedbackConstants get lumped into a few VibrationEffect constants such as:
~~~
VibrationEffect.EFFECT_TICK
VibrationEffect.EFFECT_CLICK
VibrationEffect.EFFECT_HEAVY_CLICK
~~~
Look at the source if you want to see what the AOSP default VibrationEffect would be for a specific HapticFeedbackConstants value:
PhoneWindowManager.java
If you have to provide manual haptics in your app for some reason, you can use the Vibrator API, provide the timing array for your on/off pattern, and then special-case it for phone devices that offer more hardware features.
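A minimal Xamarin.Android sketch of that fallback, assuming android.permission.VIBRATE is declared in the manifest (the timing values below are placeholders, not tuned for any particular device):

// Requires <uses-permission android:name="android.permission.VIBRATE" /> in the manifest.
void VibratePattern(Android.Content.Context context)
{
    var vibrator = (Android.OS.Vibrator)context.GetSystemService(Android.Content.Context.VibratorService);
    if (vibrator == null || !vibrator.HasVibrator)
        return;

    // Off/on timing pattern in milliseconds: wait 0 ms, vibrate 20 ms, pause 40 ms, vibrate 60 ms.
    long[] pattern = { 0, 20, 40, 60 };

    if (Android.OS.Build.VERSION.SdkInt >= Android.OS.BuildVersionCodes.O)
    {
        // API 26+ takes a VibrationEffect; -1 means "do not repeat".
        vibrator.Vibrate(Android.OS.VibrationEffect.CreateWaveform(pattern, -1));
    }
    else
    {
        // Pre-API 26 overload (deprecated but still functional).
        vibrator.Vibrate(pattern, -1);
    }
}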

Related

Turn on Airplane mode using AccessibilityService Class in Xamarin

I want to turn on or off airplane mode using AccessibilityService.
Any idea how we can do it?
You can't change it from an app that targets anything higher than Android 4.2. But you can open the settings page instead if you want:
if (Android.OS.Build.VERSION.SdkInt < BuildVersionCodes.JellyBeanMr1)
{
    try
    {
        Intent intentAirplaneMode = new Intent(Android.Provider.Settings.ActionAirplaneModeSettings);
        intentAirplaneMode.SetFlags(ActivityFlags.NewTask);
        Context.StartActivity(intentAirplaneMode);
    }
    catch (ActivityNotFoundException e)
    {
        Log.Error("exception", e + "");
    }
}
else
{
    Intent intent1 = new Intent("android.settings.WIRELESS_SETTINGS");
    intent1.SetFlags(ActivityFlags.NewTask);
    Context.StartActivity(intent1);
}
And AccessibilityService can be used via a dependency service.
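If the intent code above lives in the Android project, a hypothetical Xamarin.Forms dependency-service wiring could look like this (the interface and class names are my own, not from the original answer):

// Shared project: the interface the Forms code calls.
public interface IAirplaneModeSettings
{
    void OpenAirplaneModeSettings();
}

// Android project (separate file): register the implementation that fires the intent shown above.
[assembly: Xamarin.Forms.Dependency(typeof(MyApp.Droid.AirplaneModeSettings))]
namespace MyApp.Droid
{
    public class AirplaneModeSettings : IAirplaneModeSettings
    {
        public void OpenAirplaneModeSettings()
        {
            // Same intent logic as the snippet above goes here.
        }
    }
}

// Usage from shared code:
// Xamarin.Forms.DependencyService.Get<IAirplaneModeSettings>().OpenAirplaneModeSettings();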
Kamal, you're not going to be able to do it.
It doesn't seem like you're targeting iOS, but iOS has a lot of limitations for privacy and security reasons that won't allow you to do this. You can see more details here: stackoverflow.com/q/20469425/11104068
Also, Android blocked this from Android 4.2 onwards. Only system apps can make changes to Airplane mode, as you can see here: stackoverflow.com/a/5533943/11104068
Since it doesn't seem you're creating a system app (one installed with the operating system rather than through the Play Store), you won't be able to get the permission. It will give you an error/exception even if you implement everything.

How to enable LightningProviders for GPIO?

I am trying to access the GPIO on my custom SBC using Windows 10 IoT Core. I have discovered that I must use LightningProviders to accomplish this. So I tried to follow this guide to use Lightning providers properly.
I used very simple code:
if (LightningProvider.IsLightningEnabled)
{
    LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider();
}

GpioStatus = "Initializing...";
var gpio = GpioController.GetDefault();

if (gpio == null)
{
    GpioStatus = "There is no GPIO controller on this device.";
}
else
{
    gpio.OpenPin(1).Write(GpioPinValue.High);
    GpioStatus = gpio.OpenPin(1).Read().ToString();
}
Where GpioStatus is output text on a UI.
I discovered that if I run the LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider(); line outside of the enabled check, it picks up the GPIO controller and lets me detect how many pins I have and read them (all low). However, I can't change the DriveMode or write to the pins without error. The error I get just says to make sure the LightningProviders are enabled.
This brings me back to the guide I linked at the start. It suggests enabling the DMAP drivers using the Device Portal for W10IoT or DMAPUtil.exe. I have tried both. In the Device Portal, the area where the setting should be is just blank, and on the command line, trying to use DMAPUtil.exe only returns that dmaputil.exe is not available on this system.
Therefore I am asking: is there any other way to enable the LightningProviders, or a way to know if they are incompatible with my board?
Thanks!
UPDATE
Also tried using the devcon.exe commands in the W10IoT Command line.
I am able to locate the Direct Memory Access controller, but when I do devcon.exe enable *PNP0200 it says it is enabled, yet it remains disabled when checked with devcon.exe status *PNP0200.
Please confirm that you have added the IOT_DMAP_DRIVER feature in your OEMInput.xml; this feature adds the DMAP driver to the image. If IOT_DMAP_DRIVER is removed from the OEMInput.xml, the Default Driver Controller will be blank in the Device Portal, and dmaputil will not be available on Windows IoT Core. Please see the IoT Core feature list.
Update:
You can download the source of the Lightning Provider, and then deploy and debug it in your custom image.

The new Input System doesn't trigger anything anymore

This post is shamelessly copy/pasted from my post on the Unity Forums: https://forum.unity.com/threads/input-system-doesnt-trigger-anything-anymore.717386/, but Stack Overflow seems more active.
TL;DR: InputSystem worked some days ago, doesn't trigger anything anymore, halp.
I tried the new Input System some days ago, and that's really neat! I did a lot of stuff, trying to understand the best way to use it, and, in the end, I had a character jumping and moving everywhere, that was cool! Then, I merged my code into our develop branch and went to bed.
Today, I wanted to continue my work, but my character doesn't move anymore, Actions are not triggered (even though inputs are detected in the debugger), and I really don't know why. Either the code merge overwrote some important settings (I know what you're thinking, and yes, "Active Input Handling" is set to "Both", and I tried running only the preview) or I did something during my little tests that I didn't realize was important.
So I decided to try to reproduce my steps on a fresh new project; maybe you guys can help me figure out what I'm doing wrong?
1/ Create a new 2D project (via the Hub)
2/ Install the latest Package (version 0.9.0)
3/ Click Yes on that message prompt to activate the new Input management in the settings
4/ Restart the Unity Editor (since it didn't restart even though the message said it would) and check the project settings (yes, it's on "Both", and yes, my Scripting Runtime Version is 4.0)
5/ Create a new GameObject and add a PlayerInput on it
6/ Click on "Open Input Settings" and create an "InputSettings" asset
7/ Click on "Create Actions..." to create my ActionMap asset
8/ Create a "TestAction" on my "Player" ActionMap and set it to the key "t"
9/ Create a new Script "TestScript" that contains an OnTestAction() method (that only logs "test") and enables the test map/action (just to be sure):
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.PlayerInput;

public class TestScript : MonoBehaviour
{
    void Start()
    {
        InputActionMap playerActionMap = GetComponent<PlayerInput>().actions.GetActionMap("Player");
        playerActionMap.Enable();
        playerActionMap.GetAction("TestAction").Enable(); // Just to be sure
    }

    public void OnTestAction()
    {
        Debug.Log("test");
    }
}
10/ Press "Play" and spam "T" like a madman to try to get a debug log (note that, in the debugger, a User is created, my "t" presses are detected, and my TestAction exists and is mapped to the "t" key, but no log is displayed).
It's probably a silly problem, but it's driving me crazy. What am I doing wrong? It's even more infuriating that it worked some days ago!
Additional information:
- Switching the Input Management from "Both" to "New Input System (preview)" does nothing
- Checking in Update() whether my action is enabled returns "True" every frame
- Checking in Update() whether my action is triggered returns "False" every frame
- Using action.started/triggered/performed does nothing (I also tried switching to UnityEvent or C# events for this):
public class TestScript : MonoBehaviour
{
    InputAction a;

    void Start()
    {
        InputActionMap playerActionMap = GetComponent<PlayerInput>().actions.GetActionMap("Player");
        playerActionMap.Enable();

        a = playerActionMap.GetAction("TestAction");
        a.Enable(); // Just to be sure
        a.started += OnTriggeredTestAction;
        a.performed += OnTriggeredTestAction;
        a.canceled += OnTriggeredTestAction;
    }

    public void OnTestAction()
    {
        Debug.Log("test");
    }

    public void OnTriggeredTestAction(InputAction.CallbackContext ctx)
    {
        Debug.Log("test triggered");
    }
}
- Injecting the InputActionReference of my TestAction directly and using it does nothing
- Forcing "Default Control Scheme" and "Default Action Map" does nothing
- Using BroadcastMessage or UnityEvents doesn't work
You probably tried to import the new Input System package for multiple-input-device compatibility. These types of errors are due to a conflict between the old and new input system packages and are probably resolved in the latest updates.
To resolve this issue, go to Edit -> Project Settings -> Player -> Other Settings; under Configuration is the option Active Input Handling. Select Both. Unity will restart. Now your problem should be solved, and you will be able to use the old and new input systems simultaneously.
Check for rogue users in the input debugger
I was having very similar symptoms (Input System would randomly just stop sending callbacks). When I opened up the input debugger, it was registering the key presses, but the callbacks were never being called in my script.
Restarting Unity didn't help.
Rebooting didn't help.
I also discovered in the input debugger that there were 2 "users" in the input system and (by disabling Game Objects in the scene one at a time) discovered that I had accidentally attached another copy of my Input Action Asset to a different Game Object in the scene, and that Unity was registering this other object as a 2nd player or "user", which was assigned all the input action bindings I was trying to capture.
The rogue Action Asset was essentially intercepting the actions, preventing the callbacks from being called on the intended script. I don't know if that's your particular issue, but maybe it will help someone else who (like me) has spent hours poring over forums, looking for a solution to this elusive problem.
An easy way to tell if you have the same problem is to open the input debugger and see if the desired actions are actually mapped to the user of interest.
Screen clip of input debugger:
For me, there was an unexpected User #1, and only one of the users (not the intended one) actually had keys bound to the desired actions.
Posting just in case others run into this issue, as this solved my problem. Make sure to call Enable() for it to start routing events.
// Create and store the generated input actions wrapper
private InputControls _inputMapping;

private void Awake() => _inputMapping = new InputControls();

// Route and un-route events
private void OnEnable() => _inputMapping.Enable();
private void OnDisable() => _inputMapping.Disable();
I don't know if this will work for you but it worked for me and I was having the same issue.
I had created 2 control schemes: Mobile and PC. Mobile required Touchscreen and PC required Keyboard and Mouse. Doing this made my Mobile input events stop firing. Adding Gamepad to my Mobile control scheme allowed the events to fire again.
TL;DR: Check your control scheme and make sure it allows for the inputs you're binding to.
I had a similar problem, reproduced with exactly the steps described in the question.
In my case, I forgot to set control schemes.
The problem was fixed after adding them.
To do so:
Open your Input Action Asset.
Select a control scheme, in the upper left corner. (say, Keyboard) (if you haven't added a control scheme to begin with, your problem may be different than mine)
Go Right Click > Edit Control Scheme.
Edit Control Scheme Screen Img
Click on the plus sign to add a control scheme to the list.
Add control scheme to the list Screen Img
Select the control scheme you want to add. (in this case, Keyboard)
Select control scheme Screen Img
Should look like this:
Added control scheme Screen Img
You're all set. Save everything and the problem should be fixed.
Play your game and it should work.
As of at least Unity 2020.1.2 and Input System 1.0.0 the input system will randomly stop working correctly. The only fix I'm aware of is restarting Unity.

Distinguishing between phone and tablet browsers

I know this question has been beaten to death, but I don't want anything super complicated here.
We have a companion app with our site that is only compatible with 7- and 10-inch tablets. We need to alert only users on those devices about our app. The problem is, I can't go by resolution: my Galaxy S3 has a 1280 x 720 screen but is obviously not a tablet. I also can't for the life of me find a way to get the physical size of the screen. The only solution I have come up with is detecting whether the device can make calls with MobileCapabilities.CanInitiateVoiceCall. Unfortunately, my boss isn't happy with that solution.
So... How can I distinguish between a phone and a tablet in my web app (Server or client side)?
UPDATE: So far it seems that the best approach for Android is something from a blog post by the Android team: all Android phones use "Mobile" in the user-agent string, so checking for both "Mobile" and "Android" will tell you if it's a phone, while just "Android" should mean a tablet. iOS devices should be just as simple: checking for "iPhone" vs "iPad" seems to have worked so far.
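In server-side C#, that heuristic boils down to roughly the following (a sketch only; Request.UserAgent is where ASP.NET exposes the string):

// "Mobile" + "Android" => phone; "Android" without "Mobile" => tablet; "iPad" => tablet.
static bool IsTablet(string userAgent)
{
    if (string.IsNullOrEmpty(userAgent))
        return false;

    bool isAndroid = userAgent.IndexOf("Android", StringComparison.OrdinalIgnoreCase) >= 0;
    bool isMobile = userAgent.IndexOf("Mobile", StringComparison.OrdinalIgnoreCase) >= 0;

    if (isAndroid)
        return !isMobile; // Android tablets omit the "Mobile" token

    return userAgent.IndexOf("iPad", StringComparison.OrdinalIgnoreCase) >= 0;
}

// Usage in an ASP.NET handler: IsTablet(Request.UserAgent)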
I know this is a little late, but I was looking for the same thing.
WURFL has what you want. You can implement it easily, and there's even an API you can query.
For an ASP.NET application, you must first perform the one-off initialization.
public class Global : HttpApplication
{
    public const String WurflDataFilePath = "~/App_Data/wurfl.zip";
    // Cache key under which the manager is stored (not shown in the original snippet).
    public const String WurflManagerCacheKey = "WurflManager";

    private void Application_Start(Object sender, EventArgs e)
    {
        var wurflDataFile = HttpContext.Current.Server.MapPath(WurflDataFilePath);
        var configurer = new InMemoryConfigurer().MainFile(wurflDataFile);
        var manager = WURFLManagerBuilder.Build(configurer);
        HttpContext.Current.Cache[WurflManagerCacheKey] = manager;
    }
}
And then use it like this.
var device = WURFLManagerBuilder.Instance.GetDeviceForRequest(userAgent);
var isTablet = device.GetCapability("is_tablet");
var isSmartphone = device.GetCapability("is_smartphone");
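GetCapability returns the values as strings, so (assuming WURFL's usual "true"/"false" convention) you would parse them before branching:

// Capabilities come back as strings; turn them into booleans before use.
bool tablet = string.Equals(isTablet, "true", StringComparison.OrdinalIgnoreCase);
bool smartphone = string.Equals(isSmartphone, "true", StringComparison.OrdinalIgnoreCase);

if (tablet)
{
    // Tablet-specific handling, e.g. show the companion-app banner.
}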
For more info, check the ASP.NET implementation.
Hope this helps anyone else looking for this.
You can try to do user-agent detection and search for keywords; for example, all non-tablet devices have the "Mobile Safari" keyword in their user agent.

How to properly detect Windows, Linux & Mac operating systems

I could not find anything really reliable to correctly detect which platform (Windows / Linux / Mac) my C# program was running on, especially on Mac, which reports itself as Unix and can hardly be differentiated from Linux platforms!
So I made something less theoretical and more practical, based on Mac-specific characteristics.
I'm posting the working code as an answer. Please comment if it works well for you too / can be improved.
Thanks!
Response:
Here is the working code!
public enum Platform
{
    Windows,
    Linux,
    Mac
}

public static Platform RunningPlatform()
{
    switch (Environment.OSVersion.Platform)
    {
        case PlatformID.Unix:
            // Well, there are chances MacOSX is reported as Unix instead of MacOSX.
            // Instead of a platform check, we'll do a feature check (Mac-specific root folders).
            if (Directory.Exists("/Applications")
                && Directory.Exists("/System")
                && Directory.Exists("/Users")
                && Directory.Exists("/Volumes"))
                return Platform.Mac;
            else
                return Platform.Linux;

        case PlatformID.MacOSX:
            return Platform.Mac;

        default:
            return Platform.Windows;
    }
}
Maybe check out the IsRunningOnMac method in the Pinta source:
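From memory, that check P/Invokes uname(2) and looks for "Darwin"; roughly like this (my recollection of the approach, not a verbatim copy of the Pinta code):

using System;
using System.Runtime.InteropServices;

static class MacDetector
{
    [DllImport("libc")]
    static extern int uname(IntPtr buf);

    // Returns true when uname(2) reports "Darwin", i.e. we are running on macOS.
    public static bool IsRunningOnMac()
    {
        IntPtr buf = IntPtr.Zero;
        try
        {
            buf = Marshal.AllocHGlobal(8192);
            if (uname(buf) == 0)
            {
                string os = Marshal.PtrToStringAnsi(buf);
                if (os == "Darwin")
                    return true;
            }
        }
        catch
        {
            // P/Invoke failure almost certainly means we are not on a Unix-like OS.
        }
        finally
        {
            if (buf != IntPtr.Zero)
                Marshal.FreeHGlobal(buf);
        }
        return false;
    }
}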
Per the remarks on the Environment.OSVersion Property page:
The Environment.OSVersion property does not provide a reliable way to identify the exact operating system and its version. Therefore, we do not recommend that you use this method. Instead: To identify the operating system platform, use the RuntimeInformation.IsOSPlatform method.
RuntimeInformation.IsOSPlatform worked for what I needed.
if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
{
    // Your OSX code here.
}
else if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
{
    // Your Linux code here.
}
