I have a Canon EOS 1100D camera and control it from my C# program, which uses its tethering feature through the Canon EOS Utility DLLs.
I have to shoot one photo every 30 minutes, all day long, but if I send the camera into standby mode between two shots, it gets disconnected from my computer and I have to power it on again by pressing a button on it.
Is there any way that I can reconnect my camera programmatically?
Note: I'm wary of leaving the camera ON all the time.
After some searching, it turns out that once the connection between the camera and the computer is lost, there is nothing I can do about it from software. So I can't turn my camera ON again programmatically.
What I can do is keep my camera always ON and prevent it from sleeping. (I can do that simply from the camera's settings menu.)
And the good news is that a DSLR camera can be kept ON even when you aren't using it.
In a DSLR, the shutter only opens when a shooting command is sent.
In manual use (when a person uses the camera rather than a program or computer), the user can see the view through the camera's viewfinder eyepiece without using its LCD.
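With the camera kept awake, the scheduling side can be a plain timer in the C# program. A minimal sketch, where TakePhoto() is a hypothetical stand-in for whatever call your Canon tethering wrapper uses to trigger the shutter (it is not part of any SDK):

```csharp
using System;
using System.Threading;

class IntervalShooter
{
    private static Timer _timer;

    static void Main()
    {
        // Fire immediately, then every 30 minutes; the camera itself stays ON
        // with auto power-off disabled in its menu.
        _timer = new Timer(_ => TakePhoto(), null,
                           TimeSpan.Zero, TimeSpan.FromMinutes(30));
        Console.ReadLine(); // keep the process alive
    }

    // Hypothetical wrapper around the tethering call that triggers the shutter.
    static void TakePhoto()
    {
        Console.WriteLine("Shot taken at " + DateTime.Now);
    }
}
```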
P.S.: Can anybody help me improve this answer? I don't think I explained what I wanted to say clearly enough!
I am searching for a way to detect whether a video is playing on Windows (7, 8, 10).
The SetThreadExecutionState API function does not help: I have tried different players (VLC, BS.Player, etc.), but it seems they don't set the ES_DISPLAY_REQUIRED flag.
Checking for a disabled screensaver is not a good solution either, because a screensaver would have to be enabled in the first place, and almost nobody uses screensavers nowadays.
My app is a break timer. I am using the GetLastInputInfo() function (with the LASTINPUTINFO structure), but I also want to know when the user is watching a video, because there is no keyboard or mouse input during that time.
A dirty and partial solution would be for the app to take a snapshot of a region in the center of the screen and compare hashes, but that would only be about 90% accurate.
Any better ideas?
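For reference, the GetLastInputInfo/LASTINPUTINFO check mentioned above is usually wired up like this in C# (a minimal sketch; it only measures keyboard/mouse idle time and by itself cannot tell whether a video is playing):

```csharp
using System;
using System.Runtime.InteropServices;

static class IdleTime
{
    [StructLayout(LayoutKind.Sequential)]
    private struct LASTINPUTINFO
    {
        public uint cbSize;
        public uint dwTime; // tick count of the last input event
    }

    [DllImport("user32.dll")]
    private static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

    // Milliseconds since the last keyboard or mouse input.
    public static uint GetIdleMilliseconds()
    {
        var info = new LASTINPUTINFO { cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO)) };
        GetLastInputInfo(ref info);
        return (uint)Environment.TickCount - info.dwTime;
    }
}
```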
I want to put a scene in Unity into virtual reality using Google Cardboard.
I used to be able to just drag a CardboardMain prefab into the scene, delete the main camera, use CardboardMain as the camera position, and CardboardHead to track where the user was looking.
After reading the release notes for the new updates, I thought I could drag a GVREditorEmulator and GVRControllerMain into the scene, and keep the normal camera.
Unfortunately, I can't figure out how to get the camera to follow my character with this new setup. (In this case, a rolling ball.)
If I change the position of the normal camera, it looks like it works fine in Unity, but as soon as I upload it to my phone, the user stays in the same place, while the ball rolls away. (The user can still control the ball's movements, but the camera/user doesn't follow the ball at all.)
I had thought that the chase cam demo would be useful, but that's only for Daydream, and I'm using Cardboard.
This trick seemed to work for some people. I tried it on a previous version of Unity and a previous version of the SDK, and it did not seem to work. I may just need to try it on this new version, but I'm worried about going into the released code and editing it, so I'd prefer answers that don't require this.
Is there any way I can get the user to move in a Google Cardboard scene in Unity when I upload it to my iPhone?
UPDATE: It looks as though my main camera object is not moving, making me think that something is resetting it back to the center every time, lending some more credence to the "trick" earlier. I will now try "the trick" to see if it works.
UPDATE: It doesn't look like the lines listed in the "trick" are there anymore, and the ones that are similar in the new program don't even seem to be running. Still trying to figure out what continues to reset the main camera's position.
UPDATE: Just got a response back from Google on GitHub (or at least someone working on the SDK) that says "You just need to make the node that follows the player a parent of the camera, not the same game object as the camera." I'm not exactly sure what that means, so if someone could explain that, that would most likely solve my problem. If I figure it out on my own I'll post back here.
UPDATE: Zarko Ristic posted an answer that explained what this was, but unfortunately the tracking is still off. I found out how to get Google Cardboard to work with the old SDK and posted an answer on that. Still looking for ways to get the new SDK to follow a character properly.
You can't change the position of the camera in a Cardboard application; the position of the MainCamera must always be 0,0,0. But you can simply create an empty GameObject and make it the parent of the MainCamera. In Cardboard games you actually move the parent of the camera instead of the MainCamera directly.
Add the script that tracks the ball to the MainCamera's parent GameObject (a minimal example is sketched below).
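Something along these lines, attached to the empty parent object (the ball reference and the offset values are just illustrative; assign the rolling ball in the Inspector):

```csharp
using UnityEngine;

// Attach this to the empty GameObject that is the parent of the MainCamera.
public class FollowBall : MonoBehaviour
{
    public Transform ball;                              // the rolling ball, assigned in the Inspector
    public Vector3 offset = new Vector3(0f, 2f, -4f);   // where the camera rig sits relative to the ball

    void LateUpdate()
    {
        // Move the camera rig; the MainCamera itself stays at local 0,0,0
        // so the Cardboard SDK keeps full control of its rotation.
        transform.position = ball.position + offset;
    }
}
```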
This does not answer my question, but is a solution to the problem.
Do not use the newest version of the SDK; use version 0.6.
When building the app in Unity, do not select to have VR enabled under the build settings. (VR will still be enabled in the app.) (Credit: Zarko Ristic)
When putting the app onto your phone, if XCode prompts you to change any settings, you can ignore it.
In addition, disable bitcode under "Build Settings -> Enable Bitcode -> No" (Currently, this will not allow you to put your app onto the app store. I would greatly appreciate it if anyone has information on how to get it to run without doing this.)
Now your app should run on your phone correctly.
I have been thinking about this for a while. I know we can write to our own textures with SetPixels, but I also know that is a really slow method which would drop framerates below 30 just by being there (due to the sync with the video card that happens afterwards).
So either I am using the wrong method, or I'm doing it wrong. But I cannot find a proper way to write my own VRAM to a texture or directly to a camera.
Long story short: if I were to build something like an emulator inside Unity, and I wanted it to render pixel by pixel, either to a camera or just to a texture inside Unity, how would I get this going without slowing my framerate to a crawl on most devices?
Are shaders an option? If so, please point me in a direction, since I have never written one on my own yet.
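For context, this is roughly the SetPixels-style approach described above (a minimal sketch; SetPixels32 with a single Apply per frame is the cheaper variant, but it still pays for a full CPU-to-GPU upload every frame):

```csharp
using UnityEngine;

// Renders a CPU-side framebuffer into a texture once per frame.
public class FramebufferBlit : MonoBehaviour
{
    public int width = 256;
    public int height = 224;

    private Texture2D _tex;
    private Color32[] _pixels;

    void Start()
    {
        _tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
        _pixels = new Color32[width * height];
        GetComponent<Renderer>().material.mainTexture = _tex;
    }

    void Update()
    {
        // Fill _pixels from the emulator's framebuffer here...
        _tex.SetPixels32(_pixels);
        _tex.Apply(false); // uploads the whole texture to the GPU every frame
    }
}
```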
I have a cooperative PC game, but the second player needs an Xbox controller to play. I don't have an Xbox controller; I have some other cheap 10-buck joystick, and the game doesn't recognize it.
So I tried different programs that emulate an Xbox controller from a generic joystick, but because mine is so cheap, those programs won't recognize it either.
So I want to try to create a program which simulates Xbox controller input.
Then I will capture my joystick's button presses and send the simulated Xbox ones. I already know how to capture my joystick's input; I have done that part.
The question is simple: how do I send Xbox 360 controller inputs to my PC?
If you're forced to use an Xbox controller, I assume you're using XNA.
If that's the case, you could try this Wrapper:
http://sourceforge.net/projects/xnadirectinput/
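If the game really is built on XNA, the side that reads the pad looks roughly like this (a minimal sketch; only devices exposed through XInput show up here, which is why a generic DirectInput joystick is invisible to the game):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

public class CoopGame : Game
{
    protected override void Update(GameTime gameTime)
    {
        // Player two's pad, as XNA sees it (XInput devices only).
        GamePadState pad = GamePad.GetState(PlayerIndex.Two);

        if (pad.IsConnected)
        {
            Vector2 move = pad.ThumbSticks.Left;               // -1..1 on each axis
            bool jump = pad.Buttons.A == ButtonState.Pressed;  // example button check
            // ... feed 'move' and 'jump' into the second player's logic
        }

        base.Update(gameTime);
    }
}
```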
I am writing a picture-editing Windows Forms application using VB.NET/C#. I have a client requirement to capture a photo from a digital still camera attached to the computer.
How can I capture a photo from a USB-connected digital still camera in my Windows application?
If you use the Windows Image Acquisition library, you'll see events there for the camera's new-picture notifications. I had a similar requirement and wrote a test rig; we went down to the local camera store and tried every camera they had. The only cameras we could find that supported this functionality were the Nikon D-series cameras.
We found that with most cameras, you can't even take a picture when they are plugged in. When you plug them in to the USB port, most cameras will switch into a mode where the only thing they'll do is transfer data. The quick way to find out if a camera will work at all is to plug it into a PC, then try to snap a picture. If it lets you do that you have a chance. It also needs to support PTP.
I assume you want to activate the action of taking a picture from the computer which the camera is attached to. If that is the case then the first thing I would do is search for an API for that particular camera model. I don't believe there is a standard protocol/framework for interacting with digital cameras besides accessing the memory card within the camera.
This depends on the interface the camera exposes. If it is the standard mass storage interface, you just use the standard file API; i.e., you will see the camera as a removable disk and can use ordinary create/read/write file operations.
Many newer cameras have a PTP (Picture Transfer Protocol) interface. For those you will need to use the Windows Image Acquisition API.
You might find the following link useful. If I understand correctly, it is sample code for exactly what you are looking for. Google is your friend :)
Another piece of info: many cameras support both the mass storage and PTP interfaces, selectable from the camera's own menu. In automatic mode the camera will probably switch to the PTP interface.
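For the mass storage case, the "standard file interface" really is plain .NET I/O. A minimal sketch that looks for a ready removable drive and copies any JPEGs out of its DCIM folder (the folder name and the destination path are just examples):

```csharp
using System.IO;

class CameraImport
{
    static void Main()
    {
        foreach (DriveInfo drive in DriveInfo.GetDrives())
        {
            // A camera in mass storage mode typically appears as a removable drive.
            if (drive.DriveType != DriveType.Removable || !drive.IsReady)
                continue;

            string dcim = Path.Combine(drive.RootDirectory.FullName, "DCIM");
            if (!Directory.Exists(dcim))
                continue;

            foreach (string file in Directory.GetFiles(dcim, "*.jpg", SearchOption.AllDirectories))
            {
                File.Copy(file, Path.Combine(@"C:\Photos", Path.GetFileName(file)), true);
            }
        }
    }
}
```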
Usually the camera is displayed as a removable drive when attached.
So for a WinForms application, just let the user select the drive and the picture to upload (sketched below). You can do any processing once you have a FileStream for the picture.
In ASP.NET you are going to need a FileUpload control, where again the user can select the drive and picture to upload. Processing this time would be via a MemoryStream on the HttpRequest.Files object.
Hope that helps.
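A minimal WinForms sketch of the "let the user pick the file" approach (the form, control, and handler names are only illustrative):

```csharp
using System;
using System.IO;
using System.Windows.Forms;

public partial class PhotoForm : Form
{
    // Wired to a button's Click event in the designer (name is illustrative).
    private void btnLoadPhoto_Click(object sender, EventArgs e)
    {
        using (var dialog = new OpenFileDialog { Filter = "Images|*.jpg;*.jpeg;*.png" })
        {
            if (dialog.ShowDialog(this) != DialogResult.OK)
                return;

            // The camera's removable drive is browsable here like any other drive.
            using (var stream = new FileStream(dialog.FileName, FileMode.Open, FileAccess.Read))
            {
                // ... process the picture's FileStream here
            }
        }
    }
}
```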
This depends on your camera.
Many cameras will simply mount as USB mass storage devices. If this is the case, then you can just copy the file from the visible file system like you would any other file on an external disk.
If the camera doesn't make its contents available in this way, you'll need to look at the camera driver documentation to see how they recommend interacting with it.
It will depend on the brand of camera. Here is a link to start with for Canon.