I connect to Firebase from Unity and download a simple image from it. But I want to know how I can save this image, for example to the Assets folder of the Unity project?
void Start()
{
FirebaseStorage storage = FirebaseStorage.DefaultInstance;
Firebase.Storage.StorageReference path_reference = storage.GetReference("background/1.jpg");
// SaveToAssets = storage.GetReference("background/1.jpg") // something like that.
}
This post is almost what you need. I just expanded it with the file IO part, writing the bytes out via a FileStream, and used UnityWebRequestTexture instead of WWW:
private void Start()
{
FirebaseStorage storage = FirebaseStorage.DefaultInstance;
Firebase.Storage.StorageReference path_reference = storage.GetReference("background/1.jpg");
path_reference.GetDownloadUrlAsync().ContinueWithOnMainThread(
task =>
{
Debug.Log("Default Instance entered");
if (task.IsFaulted)
{
Debug.Log("Error retrieving download URL from server");
}
else if (task.IsCompleted)
{
// GetDownloadUrlAsync returns a Task<Uri> with the https download URL of the file
string data_URL = task.Result.ToString();
// Start coroutine to download the image
// (ContinueWithOnMainThread from Firebase.Extensions makes sure StartCoroutine runs on the main thread)
StartCoroutine(DownloadImage(data_URL));
}
}
}
);
}
private IEnumerator DownloadImage(string url)
{
using (UnityWebRequest uwr = UnityWebRequestTexture.GetTexture(url))
{
yield return uwr.SendWebRequest();
if (uwr.isNetworkError || uwr.isHttpError)
{
Debug.Log(uwr.error);
}
else
{
// Instead of accessing the texture
//var texture = DownloadHandlerTexture.GetContent(uwr);
// you can directly get the bytes
var bytes = uwr.downloadHandler.data;
// and write them to a file, e.g. using
// ("filename" is whatever target path you choose; see the note below this snippet)
using (FileStream fileStream = File.Open(filename, FileMode.OpenOrCreate))
{
// write synchronously here; an un-awaited WriteAsync could be cut short when the stream is disposed
fileStream.Write(bytes, 0, bytes.Length);
}
}
}
}
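A usage note that is not part of the original answer (my assumption about where you want the file): the Assets folder only exists in the Editor, so a path under Application.dataPath will only work there; in a built player you would write to Application.persistentDataPath instead. For example:
#if UNITY_EDITOR
string filename = System.IO.Path.Combine(Application.dataPath, "1.jpg");             // lands in Assets/ (Editor only)
#else
string filename = System.IO.Path.Combine(Application.persistentDataPath, "1.jpg");   // writable location on device
#endif
In the Editor you may also need to call AssetDatabase.Refresh() afterwards for the new file to show up in the Project window.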
Note: I don't use Firebase, Android or iOS, so I couldn't test this.
I wanted to convert this image to Base64 in Unity C#, but I am kind of lost. Can someone help?
This is my code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;
using UnityEngine.Networking;
public class GetCam : MonoBehaviour
{
WebCamTexture webCam;
string your_path = "C:\\Users\\Jay\\Desktop";
public RawImage display;
public AspectRatioFitter fit;
public void Start()
{
webCam = new WebCamTexture();
webCam.Play();
if(WebCamTexture.devices.Length==0)
{
Debug.LogError("can not found any camera!");
return;
}
int index = -1;
for (int i = 0; i < WebCamTexture.devices.Length; i++)
{
if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
{
Debug.LogError("WebCam Name:" + WebCamTexture.devices[i].name + " Webcam Index:" + i);
index = i;
}
}
}
public void Update()
{
float ratio = (float)webCam.width / (float)webCam.height;
fit.aspectRatio = ratio;
float ScaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
display.rectTransform.localScale = new Vector3(1f, ScaleY, 1f);
int orient = -webCam.videoRotationAngle;
display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
}
public void callTakePhoto()
{
StartCoroutine(TakePhoto());
}
IEnumerator TakePhoto()
{
yield return new WaitForEndOfFrame();
Texture2D photo = new Texture2D(webCam.width, webCam.height);
photo.SetPixels(webCam.GetPixels());
photo.Apply();
//Encode to a PNG
byte[] bytes = photo.EncodeToPNG();
//Convert PNG to Base64
//Write out the PNG. Of course you have to substitute your_path for something sensible
File.WriteAllBytes(your_path + "\\photo.png", bytes);
//lol
}
public void callpostRequest()
{
StartCoroutine(postRequest("http://127.0.0.1:5000/"));
}
IEnumerator postRequest(string url)
{
WWWForm form = new WWWForm();
form.AddField("name","hi from unity client");
// form.AddField("nice","ok");
UnityWebRequest uwr = UnityWebRequest.Post(url, form);
yield return uwr.SendWebRequest();
StartCoroutine(getRequest("http://127.0.0.1:5000/"));
if (uwr.isNetworkError)
{
Debug.Log("Error While Sending: " + uwr.error);
}
else
{
Debug.Log("Received: " + uwr.downloadHandler.text);
}
}
IEnumerator getRequest(string uri)
{
UnityWebRequest uwr = UnityWebRequest.Get(uri);
yield return uwr.SendWebRequest();
if (uwr.isNetworkError)
{
Debug.Log("Error While Sending: " + uwr.error);
}
else
{
Debug.Log("Received: " + uwr.downloadHandler.text);
}
}
}
For now all it does is take a picture and save it to the desktop; the postRequest/getRequest coroutines above use UnityWebRequest to send a "hi" message to a Flask server and receive the same message back.
What I want to do is convert the image to Base64 and send it to the Flask server. Can anyone help? Thank you.
In general you would simply do Convert.ToBase64String(byte[])
string base64String = Convert.ToBase64String(byteArray);
and accordingly Convert.FromBase64String(string)
byte[] byteArray = Convert.FromBase64String(base64String);
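To tie that into the question's existing coroutines, a minimal sketch could look like this (the form field name "image" is my assumption; use whatever key your Flask server reads):
// in TakePhoto(), after EncodeToPNG():
string base64String = System.Convert.ToBase64String(bytes);

// in postRequest(), send it as a normal form field:
WWWForm form = new WWWForm();
form.AddField("image", base64String); // Flask side: base64.b64decode(request.form["image"])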
Why though?
Images are "big" binary data. Base64 blows each byte up into hexadecimal chars which adds a lot of overhead.
=> Why not rather send the binary data directly?
See MultipartFormFileSection and use the overload public static UnityWebRequest Post(string uri, List<IMultipartFormSection> multipartFormSections);
like e.g.
IEnumerator Upload(byte[] fileContent)
{
var sections = new List<IMultipartFormSection>();
// Adjust the field name etc. according to your server's needs
sections.Add(new MultipartFormFileSection("files", fileContent, "photo.png", "image/png"));
using(UnityWebRequest www = UnityWebRequest.Post("http://127.0.0.1:5000/", sections))
{
yield return www.SendWebRequest();
if (www.result != UnityWebRequest.Result.Success)
{
Debug.Log(www.error);
}
else
{
Debug.Log("Form upload complete!");
// Get the server response
// NO NEED FOR A GET REQUESTS!
var serverResponse = www.downloadHandler.text;
Debug.Log(serverResponse);
}
}
}
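To hook this into the question's TakePhoto coroutine, you would simply start it with the PNG bytes, e.g.:
// at the end of TakePhoto(), instead of (or in addition to) writing the PNG to disk
StartCoroutine(Upload(bytes));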
and for downloading, accordingly use UnityWebRequestTexture.GetTexture, like:
using (var uwr = UnityWebRequestTexture.GetTexture("https://www.my-server.com/myimage.png"))
{
yield return uwr.SendWebRequest();
if (uwr.result != UnityWebRequest.Result.Success)
{
Debug.Log(uwr.error);
}
else
{
// Get downloaded texture
var texture = DownloadHandlerTexture.GetContent(uwr);
...
}
}
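What you do with the downloaded texture is up to you; for example (a sketch, reusing the RawImage called display from the question's script):
// show it on the RawImage
display.texture = texture;

// or additionally persist it to disk
File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "myimage.png"), texture.EncodeToPNG());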
So far, after some research on the internet, I have been able to select .jpeg files from the computer and upload them to Firebase using Unity C#.
But I cannot figure out how I should modify the code below to upload .txt files as well.
If there is some simpler way to do this task, please tell me. Otherwise, tell me how I should modify this code so that it fulfills the purpose mentioned above.
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
//For Picking files
using System.IO;
using SimpleFileBrowser;
//For firebase storage
using Firebase;
using Firebase.Extensions;
using Firebase.Storage;
public class UploadFile : MonoBehaviour
{
FirebaseStorage storage;
StorageReference storageReference;
// Start is called before the first frame update
void Start()
{
FileBrowser.SetFilters(true, new FileBrowser.Filter("Images", ".jpg", ".png"), new FileBrowser.Filter("Text Files", ".txt", ".pdf"));
FileBrowser.SetDefaultFilter(".jpg");
FileBrowser.SetExcludedExtensions(".lnk", ".tmp", ".zip", ".rar", ".exe");
storage = FirebaseStorage.DefaultInstance;
storageReference = storage.GetReferenceFromUrl("gs://app_name.appspot.com/");
}
public void OnButtonClick()
{
StartCoroutine(ShowLoadDialogCoroutine());
}
IEnumerator ShowLoadDialogCoroutine()
{
yield return FileBrowser.WaitForLoadDialog(FileBrowser.PickMode.FilesAndFolders, true, null, null, "Load Files and Folders", "Load");
Debug.Log(FileBrowser.Success);
if (FileBrowser.Success)
{
// Print paths of the selected files (FileBrowser.Result) (null, if FileBrowser.Success is false)
for (int i = 0; i < FileBrowser.Result.Length; i++)
Debug.Log(FileBrowser.Result[i]);
Debug.Log("File Selected");
byte[] bytes = FileBrowserHelpers.ReadBytesFromFile(FileBrowser.Result[0]);
//Editing Metadata
var newMetadata = new MetadataChange();
newMetadata.ContentType = "image/jpeg";
//Create a reference to where the file needs to be uploaded
StorageReference uploadRef = storageReference.Child("uploads/newFile.jpeg");
Debug.Log("File upload started");
uploadRef.PutBytesAsync(bytes, newMetadata).ContinueWithOnMainThread((task) => {
if (task.IsFaulted || task.IsCanceled)
{
Debug.Log(task.Exception.ToString());
}
else
{
Debug.Log("File Uploaded Successfully!");
}
});
}
}
}
As far as I understand it, this is basically already working as expected, but you need to handle the different file types / extensions differently.
I think all you need to actually do is
use the correct file name + extension
use the correct ContentType depending on the file extension
So maybe something like
IEnumerator ShowLoadDialogCoroutine()
{
yield return FileBrowser.WaitForLoadDialog(FileBrowser.PickMode.FilesAndFolders, true, null, null, "Load Files and Folders", "Load");
Debug.Log(FileBrowser.Success);
if (!FileBrowser.Success)
{
yield break;
}
//foreach (var file in FileBrowser.Result)
//{
// Debug.Log(file);
//}
var file = FileBrowser.Result[0];
Debug.Log($"File Selected: \"{file}\"");
// Read the selected file's bytes (needed for the upload below; same helper as in the question's code)
byte[] bytes = FileBrowserHelpers.ReadBytesFromFile(file);
// e.g. C:\someFolder/someFile.txt => someFile.txt
var fileNameWithExtension = file.Split('/', '\\').Last();
if (!fileNameWithExtension.Contains("."))
{
throw new ArgumentException($"Selected file \"{file}\" is not a supported file!");
}
// e.g. someFile.txt => txt
var extensionWithoutDot = fileNameWithExtension.Split('.').Last();
// Get MIME type according to file extension
var contentType = extensionWithoutDot switch
{
"jpg" => $"image/jpeg",
"jpeg" => $"image/jpeg",
"png" => $"image/png",
"txt" => "text/plain",
"pdf" => "application/pdf",
_ => throw new ArgumentException($"Selected file \"{file}\" of type \"{extensionWithoutDot}\" is not supported!")
};
// Use dynamic content / MIME type
var newMetadata = new MetadataChange()
{
ContentType = contentType
};
// Use the actual selected file name including extension
StorageReference uploadRef = storageReference.Child($"uploads/{fileNameWithExtension}");
Debug.Log("File upload started");
uploadRef.PutBytesAsync(bytes, newMetadata).ContinueWithOnMainThread((task) =>
{
if (task.IsFaulted || task.IsCanceled)
{
Debug.LogException(task.Exception);
}
else
{
Debug.Log($"File \"{file}\" Uploaded Successfully!");
}
});
}
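For completeness: besides the using directives already at the top of the question's script, the snippet above additionally needs these two (for ArgumentException and the Last() extension method):
using System;
using System.Linq;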
you most probably will want to replace the throw with proper error handling and user feedback later ;)
I'm working on an iOS app in Unity. Eventually, the app should be able to download, import and load .obj files saved on my website's server. But I'm currently developing locally, so the files are saved in my laptop's file system (the local server side of my website).
My question is what I should use to access those files. I used WWW to access them but it doesn't seem to work. Please see my code below.
public void OnClick()
{
StartCoroutine(ImportObject());
}
IEnumerator ImportObject (){
Debug.Log("being called");
WWW www = new WWW("http://localhost:8080/src/server/uploads/user-id/file name");
Debug.Log("being called");
yield return www;
Debug.Log("NOT BEING CALLED !");
// Everything below here seems not to be called...
if (string.IsNullOrEmpty(www.error)) {
Debug.Log("Download Error");
} else {
string write_path = Application.dataPath + "/Objects/";
System.IO.File.WriteAllBytes(write_path, www.bytes);
Debug.Log("Success!");
}
GameObject spawnedPrefab;
Mesh importedMesh = objImporter.ImportFile(Application.dataPath + "/Objects/");
spawnedPrefab = Instantiate(emptyPrefabWithMeshRenderer);
spawnedPrefab.transform.position = new Vector3(0, 0, 0);
spawnedPrefab.GetComponent<MeshFilter>().mesh = importedMesh;
}
I tried multiple solutions from the internet and finally found the correct method to download and save the file, using the code below:
IEnumerator DownloadFile(string url) {
// needs "using System.Linq;" (Last), "using System.IO;" (Path/Directory) and "using UnityEngine.Networking;"
var docName = url.Split('/').Last();
var uwr = new UnityWebRequest(url, UnityWebRequest.kHttpVerbGET);
string modelSavePath = Path.Combine(Application.dataPath, "Objects");
modelSavePath = Path.Combine(modelSavePath, docName);
//Create Directory if it does not exist
if (!Directory.Exists(Path.GetDirectoryName(modelSavePath)))
{
Directory.CreateDirectory(Path.GetDirectoryName(modelSavePath));
}
var dh = new DownloadHandlerFile(modelSavePath);
dh.removeFileOnAbort = true;
uwr.downloadHandler = dh;
yield return uwr.SendWebRequest();
if (uwr.isNetworkError || uwr.isHttpError)
Debug.LogError(uwr.error);
else
Debug.Log("File successfully downloaded and saved to " + modelSavePath);
}
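A usage sketch tying this back to the question's import code (objImporter and emptyPrefabWithMeshRenderer are taken from the question and are assumptions about your setup):
IEnumerator DownloadAndImport(string url)
{
    yield return StartCoroutine(DownloadFile(url));

    // import the .obj from the same path DownloadFile wrote to
    string modelPath = Path.Combine(Application.dataPath, "Objects", url.Split('/').Last());
    Mesh importedMesh = objImporter.ImportFile(modelPath);

    GameObject spawnedPrefab = Instantiate(emptyPrefabWithMeshRenderer);
    spawnedPrefab.transform.position = Vector3.zero;
    spawnedPrefab.GetComponent<MeshFilter>().mesh = importedMesh;
}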
I have the content of a video and an object being created and passed into an HttpClient web API. Whenever I pass an image to the client it works fine and gets to the post method, but when it comes to a video the client has trouble posting it. I checked the video's size and length to make sure it meets the content-length limits and it is well under the specified ranges. The error that I receive is that the object has been disposed, but if you look at the code the object is never disposed.
Here's the code on the app side:
public async Task<bool> AddToQueueAsync(Incident i, ContentPage page, MediaFile file)
{
HttpResponseMessage result = null;
Uri webserviceURL = i.IncidentType == IncidentType.Trooper ? trooperURL : gspURL;
var fileStream = File.Open(file.Path, FileMode.Open);
try
{
using (var client = new HttpClient())
{
using (fileStream)
{
using (var stream = new StreamContent(fileStream))
{
using (var content = new MultipartFormDataContent("----MyBoundary"))
{
if(i.MediaType == "Video")
{
content.Add(stream,"file", Guid.NewGuid().ToString() + ".mp4");
}
else
{
content.Add(stream, "file", Guid.NewGuid().ToString() + ".png");
}
content.Add(new StringContent(JsonConvert.SerializeObject(i)), "metadata");
result = await client.PostAsync(webserviceURL, content);
}
}
}
}
Here is the code on the web api:
[HttpPost]
public IHttpActionResult StarGSPDATA() {
try {
if(!Request.Content.IsMimeMultipartContent()) {
Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);
}
starGSPDATAinfo suspicousInfo;
string homeDir = AppDomain.CurrentDomain.BaseDirectory;
string dir = $"{homeDir}/uploads/";
Directory.CreateDirectory(dir);
var file = HttpContext.Current.Request.Files.Count > 0 ?
HttpContext.Current.Request.Files[0] : null;
if(HttpContext.Current.Request.Form.Count > 0) {
suspicousInfo = MetaDataFromRequest(HttpContext.Current.Request.Form);
} else {
suspicousInfo = new starGSPDATAinfo();
}
if(file != null && file.ContentLength > 0) {
var fileName = Path.GetFileName(file.FileName);
var path = Path.Combine(dir, fileName);
suspicousInfo.MediaFilePath = fileName;
try {
file.SaveAs(path);
} catch(Exception e) {
Console.WriteLine($"not saving: {e.ToString()}");
}
} else {
throw new HttpResponseException(
new HttpResponseMessage(
HttpStatusCode.NoContent));
}
CleanData(suspicousInfo);
db.starGSPDATAinfoes.Add(suspicousInfo);
db.SaveChanges();
return Created("http://localhost:50641/api/StarGSPDATA/", JsonConvert.SerializeObject(suspicousInfo));
} catch(Exception e) {
return InternalServerError(e);
}
}
It works for an image but not for a video. Please help, thank you!
Here is a picture of the error
I'm trying to find a working sample to record videos on iOS (using Xamarin), but there's always something missing or not working for me.
My best attempt, based on several forum posts and samples, is the following:
using System;
using CoreGraphics;
using Foundation;
using UIKit;
using AVFoundation;
using CoreVideo;
using CoreMedia;
using CoreFoundation;
using System.IO;
using AssetsLibrary;
namespace avcaptureframes {
public partial class AppDelegate : UIApplicationDelegate {
public static UIImageView ImageView;
UIViewController vc;
AVCaptureSession session;
OutputRecorder outputRecorder;
DispatchQueue queue;
public override bool FinishedLaunching (UIApplication application, NSDictionary launchOptions)
{
ImageView = new UIImageView (new CGRect (10f, 10f, 200f, 200f));
ImageView.ContentMode = UIViewContentMode.Top;
vc = new UIViewController {
View = ImageView
};
window.RootViewController = vc;
window.MakeKeyAndVisible ();
window.BackgroundColor = UIColor.Black;
if (!SetupCaptureSession ())
window.AddSubview (new UILabel (new CGRect (20f, 20f, 200f, 60f)) {
Text = "No input device"
});
return true;
}
bool SetupCaptureSession ()
{
// configure the capture session for low resolution, change this if your code
// can cope with more data or volume
session = new AVCaptureSession {
SessionPreset = AVCaptureSession.PresetMedium
};
// create a device input and attach it to the session
var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
if (captureDevice == null) {
Console.WriteLine ("No captureDevice - this won't work on the simulator, try a physical device");
return false;
}
//Configure for 15 FPS. Note use of LockForConfiguration()/UnlockForConfiguration()
NSError error = null;
captureDevice.LockForConfiguration (out error);
if (error != null) {
Console.WriteLine (error);
captureDevice.UnlockForConfiguration ();
return false;
}
if (UIDevice.CurrentDevice.CheckSystemVersion (7, 0))
captureDevice.ActiveVideoMinFrameDuration = new CMTime (1, 15);
captureDevice.UnlockForConfiguration ();
var input = AVCaptureDeviceInput.FromDevice (captureDevice);
if (input == null) {
Console.WriteLine ("No input - this won't work on the simulator, try a physical device");
return false;
}
session.AddInput (input);
// create a VideoDataOutput and add it to the session
var settings = new CVPixelBufferAttributes {
PixelFormatType = CVPixelFormatType.CV32BGRA
};
using (var output = new AVCaptureVideoDataOutput { WeakVideoSettings = settings.Dictionary }) {
queue = new DispatchQueue ("myQueue");
outputRecorder = new OutputRecorder ();
output.SetSampleBufferDelegate (outputRecorder, queue);
session.AddOutput (output);
}
session.StartRunning ();
return true;
}
public override void OnActivated (UIApplication application)
{
}
public class OutputRecorder : AVCaptureVideoDataOutputSampleBufferDelegate
{
AVAssetWriter writer=null;
AVAssetWriterInput writerinput= null;
CMTime lastSampleTime;
int frame=0;
NSUrl url;
public OutputRecorder()
{
string tempFile = Path.Combine(Path.GetTempPath(), "NewVideo.mp4");
if (File.Exists(tempFile)) File.Delete(tempFile);
url = NSUrl.FromFilename(tempFile);
NSError assetWriterError;
writer = new AVAssetWriter(url, AVFileType.Mpeg4, out assetWriterError);
var outputSettings = new AVVideoSettingsCompressed()
{
Height = 300,
Width = 300,
Codec = AVVideoCodec.H264,
CodecSettings = new AVVideoCodecSettings
{
AverageBitRate = 1000000
}
};
writerinput = new AVAssetWriterInput(mediaType: AVMediaType.Video, outputSettings: outputSettings);
writerinput.ExpectsMediaDataInRealTime = false;
writer.AddInput(writerinput);
}
public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
{
try
{
lastSampleTime = sampleBuffer.PresentationTimeStamp;
var image = ImageFromSampleBuffer(sampleBuffer);
if (frame == 0)
{
writer.StartWriting();
writer.StartSessionAtSourceTime(lastSampleTime);
frame = 1;
}
String infoString = "";
if (writerinput.ReadyForMoreMediaData)
{
if (!writerinput.AppendSampleBuffer(sampleBuffer))
{
infoString = "Failed to append sample buffer";
}
else
{
infoString = String.Format("{0} frames captured", frame++);
}
}
else
{
infoString = "Writer not ready";
}
Console.WriteLine(infoString);
ImageView.BeginInvokeOnMainThread(() => ImageView.Image = image);
}
catch (Exception e)
{
Console.WriteLine(e);
}
finally
{
sampleBuffer.Dispose();
}
}
UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
{
// Get the CoreVideo image
using (var pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer)
{
// Lock the base address
pixelBuffer.Lock (CVOptionFlags.None);
// Get the number of bytes per row for the pixel buffer
var baseAddress = pixelBuffer.BaseAddress;
var bytesPerRow = (int)pixelBuffer.BytesPerRow;
var width = (int)pixelBuffer.Width;
var height = (int)pixelBuffer.Height;
var flags = CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little;
// Create a CGImage on the RGB colorspace from the configured parameter above
using (var cs = CGColorSpace.CreateDeviceRGB ())
{
using (var context = new CGBitmapContext (baseAddress, width, height, 8, bytesPerRow, cs, (CGImageAlphaInfo)flags))
{
using (CGImage cgImage = context.ToImage ())
{
pixelBuffer.Unlock (CVOptionFlags.None);
return UIImage.FromImage (cgImage);
}
}
}
}
}
void TryDispose (IDisposable obj)
{
if (obj != null)
obj.Dispose ();
}
}
}
}
This works for displaying the live camera image, and I get the "frames captured" message in the console, but I can't find out how to record to a file.
I read somewhere about adding VideoCapture but I don't know how to link it with my code.
Any help is welcome.
From your code, in the constructor of the class OutputRecorder you have defined the url where you want to save the recording:
string tempFile = Path.Combine(Path.GetTempPath(), "NewVideo.mp4");
if (File.Exists(tempFile)) File.Delete(tempFile);
url = NSUrl.FromFilename(tempFile);
This means you want to save the video in the tmp folder of the app's sandbox. If you want to use the video later, I recommend changing the folder to Documents by using:
string filePath = Path.Combine(NSSearchPath.GetDirectories(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomain.User)[0], "NewVideo.mp4");
I notice that you have called session.StartRunning(); in the method bool SetupCaptureSession() to start recording. Please add session.StopRunning(); to end recording; then the video will be saved at the path we just defined above.
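One thing the answer does not spell out (my addition, so treat it as an assumption): with an AVAssetWriter you typically also have to finish the writer so the .mp4 file is finalized and playable. A minimal sketch, adding a helper to OutputRecorder and calling it after stopping the session:
// hypothetical helper on OutputRecorder; member names match the code above
public void FinishRecording()
{
    writerinput.MarkAsFinished();
    writer.FinishWriting(() => Console.WriteLine("Video finalized at: " + url.Path));
}

// e.g. wherever you decide recording should end:
// session.StopRunning();
// outputRecorder.FinishRecording();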
Moreover, you can retrieve the video with the path like:
NSData videoData = NSData.FromFile(filePath);