I am making a game in Unity and I want to add a motion blur visual effect. Which components are required to do that, and is any scripting required? If so, can anyone give an idea?
Also, I need to know how to toggle it through the UI.
Go to the Package Manager in Unity and add the Post Processing package.
After that, create a new post-processing profile in your Assets and add Motion Blur to it. Create a post-processing volume in your scene, set it to global, and select the post-processing profile. Finally, place a Post-process Layer component on your camera and set its layer mask to Everything.
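To toggle the effect from the UI, one option is a small script that enables or disables the Motion Blur override on the volume's profile. This is a rough sketch assuming the Post Processing Stack v2 package; the class name `MotionBlurToggle` and the field wiring are my own, not from any official sample:

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public class MotionBlurToggle : MonoBehaviour
{
    // Assign the global PostProcessVolume in the Inspector.
    public PostProcessVolume volume;

    private MotionBlur motionBlur;

    void Start()
    {
        // Fetch the Motion Blur settings from the volume's profile.
        volume.profile.TryGetSettings(out motionBlur);
    }

    // Hook this method up to a UI Toggle's OnValueChanged event.
    public void SetMotionBlur(bool isOn)
    {
        if (motionBlur != null)
            motionBlur.active = isOn;
    }
}
```

With this attached somewhere in the scene, a UI Toggle can drive `SetMotionBlur` directly from its OnValueChanged list, no further scripting needed.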
[Unity Post-Processing Documentation](https://docs.unity3d.com/560/Documentation/Manual/PostProcessing-Stack-SetUp.html)
I downloaded and imported the Unity Post Processing package, version 3.0.3, from Window > Package Manager.
Then I added the Post-process Layer component to the Camera, but when I select Layer there is no "postprocessing" entry in the list:
What I'm trying to achieve is a drone-style grayscale (black and white) camera effect, and later to make specific objects, like enemies, appear in light colors, just like a UAV camera view.
This is an example image of what I want to achieve as the effect in the camera:
https://docs.unity3d.com/Packages/com.unity.postprocessing#3.0/manual/Quick-start.html
You have to create a custom layer, just like you would for anything else. The rest is up to the PostProcessingVolume: assign this "postprocessing" layer to the game object that the PostProcessingVolume component is attached to.
The layer your camera is on needs to be set on the post-processing layer component.
I am currently using the NVIDIA FleX package in Unity3D to create soft-bodied, jelly objects. I'm using Unity for animation only, not game dev.
What I am aiming to make is a transparent, jello sphere that retains its spherical shape with elasticity.
The first way I've tried to achieve this is using Flex Array + fluid setting. I've been playing with the settings but I can't get it to remain a sphere, it just becomes a more/less viscous fluid blob.
The second way is using the Flex Soft + fluid setting. It is much better in terms of physics, but even with "draw particles" off, the water droplets are each separate and not one jelly sphere.
This is what it looks like before hitting play, where the left is with Flex Array and the right is Flex Soft. The particles for Array are visible but not for Soft.
This is after hitting play, where the Array becomes one viscous fluid, but not a sphere, and the Soft is very jello-like but the water droplets are all separated.
A solution for either of the two ways would be much appreciated!
The standard approach is to create an NVIDIA Flex Controller first...
Then you should also create a Flex Soft Asset...
Then you should create or select a game object and through the Add Component tab in the game object's inspector, find the Flex Soft Actor component [see it loaded up in the image below]...
Ensure your Soft Actor Asset mentioned previously has your required mesh type selected in the inspector option [I chose sphere in the image here] and check to see it looks something like the image below to be sure...
So after that, hopefully, you can just press play and see it in action as it drops and contorts for you.
If not, I have created a quick example for you to download as a unitypackage.
It may still require further resolution in the Package Manager, as the Flex plugin is already included inside the package I'm providing here (using Unity 2020.3.5f1).
Flex in unity package
Anyway, hope this gets you started and somewhere towards your goal with Flex.
As a bonus, I've added a small script to move the flex object as this is outside of the usual approach as we have to call to the NVidia Flex component class of choice and invoke the ApplyImpulse method.
Cheers :)
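The movement script mentioned above can be sketched roughly as follows. `ApplyImpulse` is the method the answer refers to; the component class (`FlexSoftActor` here), the namespace, and the exact signature are assumptions based on the NVIDIA Flex Unity plugin and may differ in your version:

```csharp
using UnityEngine;
using NVIDIA.Flex; // namespace assumed from the NVIDIA Flex Unity plugin

// Minimal sketch: nudge the Flex soft body around with the arrow keys
// by invoking ApplyImpulse on the actor component.
public class FlexMover : MonoBehaviour
{
    public float force = 5f;

    private FlexSoftActor actor; // the Flex component class of your choice

    void Start()
    {
        actor = GetComponent<FlexSoftActor>();
    }

    void Update()
    {
        Vector3 dir = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                  Input.GetAxis("Vertical"));
        if (dir != Vector3.zero && actor != null)
            actor.ApplyImpulse(dir * force * Time.deltaTime);
    }
}
```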
Edit: There is a small set of 3 tutorials from NVIDIA GameWorks on integrating the plugin with Unity and demonstrating some features; this material is included in my downloadable package provided above.
Here is the YouTube link to the set:
Nvidia Gameworks FleX tutorials on Youtube
Edit 2: Rereading your question made me think I hadn't really given you a definitive answer regarding using a cloth actor and having the mesh renderer deform via the Flex cloth deform component.
I am providing another link to another unity package here that will show this in action also allowing you to see the game object and how the cloth component from NVIDIA Flex works with the standard mesh filter and mesh renderer. Hope this more accurately answers your question :)
Example also using Cloth Actors as well as Soft Actors in NVIDIA FleX
I'm a newbie at Unity animations and I want to create a game scene in which, at runtime, I would be able to move a humanoid 3D model and save the movement as an animation (.anim) file.
I may move the parts of the model by mouse or by mapping scrolling bars to animation property values, somehow.
To do all this I decided to go for the animator properties of a model. Kindly look at the picture in the link below to see exactly what I am talking about. (I wasn't able to post the image since I'm new to Stack Overflow too.)
I searched the Unity Scripting API for this but wasn't able to find documentation on how to use the above-mentioned properties directly in scripts so that animation curves can be generated from them.
Is it possible to do this?
Is there a better way to record animations of humanoid models at runtime (in-game mode) or a better property for humanoid models to use?
Note: The whole animation process is to be handled through code.
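One possible approach for the code-driven recording described above is Unity's `GameObjectRecorder`, which captures transform changes in Play Mode and writes them into a clip. Note this lives in `UnityEditor.Animations`, so it works in the editor only, not in a built player; this is a hedged sketch, and the class name `RuntimeAnimRecorder` and the wiring are assumptions:

```csharp
using UnityEngine;
using UnityEditor.Animations; // GameObjectRecorder is editor-only

// Attach to the humanoid model's root; assign a pre-created .anim clip asset.
public class RuntimeAnimRecorder : MonoBehaviour
{
    public AnimationClip clip; // the clip the recording is saved into

    private GameObjectRecorder recorder;

    void Start()
    {
        recorder = new GameObjectRecorder(gameObject);
        // Bind all Transforms under the model so bone movement is captured.
        recorder.BindComponentsOfType<Transform>(gameObject, true);
    }

    void LateUpdate()
    {
        // Record the pose after all movement for this frame has been applied.
        recorder.TakeSnapshot(Time.deltaTime);
    }

    void OnDisable()
    {
        if (recorder != null && recorder.isRecording)
            recorder.SaveToClip(clip); // write the recorded curves into the clip
    }
}
```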
I want to put a scene in Unity into virtual reality using Google Cardboard.
I used to be able to just drag a CardboardMain prefab into the scene, delete the main camera, use CardboardMain as the camera position, and CardboardHead to track where the user was looking.
After reading the release notes for the new updates, I thought I could drag a GVREditorEmulator and GVRControllerMain into the scene, and keep the normal camera.
Unfortunately, I can't figure out how to get the camera to follow my character with this new setup. (In this case, a rolling ball.)
If I change the position of the normal camera, it looks like it works fine in Unity, but as soon as I upload it to my phone, the user stays in the same place, while the ball rolls away. (The user can still control the ball's movements, but the camera/user doesn't follow the ball at all.)
I had thought that the chase cam demo would be useful, but that's only for Daydream, and I'm using Cardboard.
This trick seemed to work for some people. I tried it on a previous version of Unity and a previous version of the SDK and it did not seem to work. I may just need to try it on this new version, but I'm worried about going into the released code and editing it, so I'd prefer answers that don't require this.
Is there any way I can get the user to move in a Google Cardboard scene in Unity when I upload it to my iPhone?
UPDATE:
It looks as though my main camera object is not moving, making me think that something is resetting it back to the center every time, lending some more credence to the "trick" earlier. I will now try "the trick" to see if it works.
UPDATE: It doesn't look like the lines listed in the "trick" are there anymore, and the ones that are similar in the new program don't even seem to be running. Still trying to figure out what continues to reset the main camera's position.
UPDATE: Just got a response back from Google on GitHub (or at least someone working on the SDK) that says "You just need to make the node that follows the player a parent of the camera, not the same game object as the camera." I'm not exactly sure what that means, so if someone could explain that, that would most likely solve my problem. If I figure it out on my own I'll post back here.
UPDATE: Zarko Ristic posted an answer that explained what this was, but unfortunately the tracking is still off. I found out how to get Google Cardboard to work with the old SDK and posted an answer on that. Still looking for ways to get the new SDK to follow a character properly.
You can't change the position of the camera in a Cardboard application; the position of MainCamera must always be (0, 0, 0). But you can simply make an empty GameObject and make it the parent of the MainCamera. In Cardboard games you actually move the parent of the camera instead of the MainCamera directly.
Add the script for tracking the ball to the MainCamera's parent GameObject.
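A minimal sketch of such a tracking script, attached to the camera's parent object, might look like this (the class name `FollowBall` and the offset value are my own assumptions, not from the SDK):

```csharp
using UnityEngine;

// Attach to the empty parent of the MainCamera.
// Each frame it moves the parent to follow the ball, while the
// Cardboard SDK keeps controlling the camera's local rotation.
public class FollowBall : MonoBehaviour
{
    public Transform ball;                       // assign the rolling ball
    public Vector3 offset = new Vector3(0, 2, -4); // camera offset from the ball

    void LateUpdate()
    {
        if (ball != null)
            transform.position = ball.position + offset;
    }
}
```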
This does not answer my question, but is a solution to the problem.
Do not use the newest version of the SDK. Use version 0.6
When building the app in Unity, do not select to have VR enabled under the build settings. (VR will still be enabled in the app.) (Credit: Zarko Ristic)
When putting the app onto your phone, if XCode prompts you to change any settings, you can ignore it.
In addition, disable bitcode under Build Settings -> Enable Bitcode -> No. (Currently, this will not allow you to put your app onto the App Store. I would greatly appreciate any information on how to get it to run without doing this.)
Now your app should run on your phone correctly.
I want to create a simple AR application using Unity3D, the Vuforia package, and OpenCV. Normally in a Unity AR app, a selected 2D target is found and a virtual 3D object is projected onto it.
I want to change this scenario to:
Open AR camera
Get frame from camera
Process the frame using opencv functions(maybe opencvsharp)
Find marker and project virtual object
To do this task, I did the steps below:
create a new project in unity
import vuforia package
delete main camera
add AR camera
The AR camera has 3 C# files; I opened and looked through them, but I don't see any code that opens the camera and gets frames. The screenshot is below:
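For the "get frame from camera" step, a common pattern is to register a pixel format with Vuforia's `CameraDevice` and then read the camera image each frame. This is a sketch against the legacy Vuforia API; exact class names (e.g. `VuforiaARController`) vary by SDK version, and the hand-off to OpenCV is only indicated in a comment:

```csharp
using UnityEngine;
using Vuforia;

// Sketch: register an RGB format once Vuforia has started, then grab the
// current camera frame every Update so it can be handed to OpenCV
// (e.g. via OpenCvSharp's Mat constructors).
public class FrameGrabber : MonoBehaviour
{
    private bool formatRegistered;

    void Start()
    {
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(() =>
        {
            formatRegistered =
                CameraDevice.Instance.SetFrameFormat(Image.PIXEL_FORMAT.RGB888, true);
        });
    }

    void Update()
    {
        if (!formatRegistered) return;

        Image image = CameraDevice.Instance.GetCameraImage(Image.PIXEL_FORMAT.RGB888);
        if (image != null)
        {
            byte[] pixels = image.Pixels; // raw bytes for OpenCV processing
            // ... pass pixels, image.Width, image.Height to your OpenCV code
        }
    }
}
```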
EDIT
Accepted answer helps load user defined marker.
This is the basic workflow of Vuforia using Unity.
Start from License Manager
On the License Manager page go to Add License Key
Put in your details, and if you don't plan to use a paid version, click None under the Select Plan option.
Now go to Target Manager
First, Add a Database (if you don't have one already), give it a Name, add the License Key, and click Create.
Now inside your Database click Add Target and again put in all the details and upload your image.
Now Download Database and make sure you've set its usage to Unity Editor.
Now within Unity
Add your ARCamera
Import the Database you downloaded into Unity through Assets - Import Package - Custom Package.
Now in your Inspector Panel of ARCamera, you'll see Data Set Load Behaviour has your unity package name. Check it, and check Active as well
Now in Assets, Go to Qualcomm - Prefabs - ImageTarget and drag Image Target onto the scene.
In the ImageTarget Inspector you'll find ImageTarget behaviour and you can set the values to your Image.
From here on, what you do is completely up to you. You can add a Model or animate over it as you would in normal Unity applications.
User Defined Targets can be found at: https://developer.vuforia.com/library/articles/Solution/Unity-Load-DataSet-from-SD-Card
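For loading such a dataset from code rather than through the ARCamera's Inspector, the legacy Vuforia API exposes an `ObjectTracker` with dataset methods. The snippet below is only a sketch: the dataset name, the storage type enum, and the exact class names are assumptions that differ between SDK versions, so check the linked article against your version:

```csharp
using Vuforia;

// Sketch: load and activate a user-defined dataset at runtime
// (legacy Vuforia API; names are assumptions).
public static class DataSetLoader
{
    public static void LoadTargets()
    {
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        DataSet dataSet = tracker.CreateDataSet();

        // "MyTargets.xml" is a placeholder for your downloaded database file.
        if (dataSet.Load("MyTargets"))
        {
            tracker.Stop();              // tracker must be stopped to swap datasets
            tracker.ActivateDataSet(dataSet);
            tracker.Start();
        }
    }
}
```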
Hope this helps.