Creating Large List&lt;T&gt; - C#

I have the following code:
var lstMusicInfo = new List<MediaFile>();
var LocalMusic = Directory.EnumerateFiles(AppSettings.Default.ComputerMusicFolder, "*.*", SearchOption.AllDirectories).AsParallel().ToList<string>();
LocalMusic = (from a in LocalMusic.AsParallel()
where a.EndsWith(".mp3") || a.EndsWith(".wma")
select a).ToList<string>();
var DeviceMusic = adb.SyncMedia(this.dev, AppSettings.Default.ComputerMusicFolder, 1);
Parallel.ForEach(LocalMusic, new Action<string>(item =>
{
try
{
UltraID3 mFile = new UltraID3();
FileInfo fInfo;
mFile.Read(item);
fInfo = new FileInfo(item);
bool onDevice = true;
if (DeviceMusic.Contains(item))
{
onDevice = false;
}
// My Problem starts here
lstMusicInfo.Add(new MediaFile()
{
Title = mFile.Title,
Album = mFile.Album,
Year = mFile.Year.ToString(),
ComDirectory = fInfo.Directory.FullName,
FileFullName = fInfo.FullName,
Artist = mFile.Artist,
OnDevice = onDevice,
PTDevice = false
});
//Ends here.
}
catch (Exception) { }
}));
this.Dispatcher.BeginInvoke(new Action(() =>
{
lstViewMusicFiles.ItemsSource = lstMusicInfo;
blkMusicStatus.Text = "";
doneLoading = true;
}));
#endregion
}));
The first part of the code gives me an almost instant result containing:
the paths on the computer of 5,780 files;
a list of all music files on an Android device, compared against those 5,780 files, returning the files found on the computer but not on the device (in my case a list with 5,118 items).
The block of code below is my problem: I am filling data into a class and then adding that class to a List&lt;T&gt;. Doing that 5,780 times takes 60 seconds. How can I improve it?
// My Problem starts here
lstMusicInfo.Add(new MediaFile
{
Title = mFile.Title,
Album = mFile.Album,
Year = mFile.Year.ToString(),
ComDirectory = fInfo.Directory.FullName,
FileFullName = fInfo.FullName,
Artist = mFile.Artist,
OnDevice = onDevice,
PTDevice = false
});
//Ends here.
Update:
Here is the profiling result, and it's obvious why it's slowing down >_>
I suppose I should look for a different library that reads music file information.
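For example, something like this with TagLib# (the taglib-sharp NuGet package) might be a drop-in replacement; whether it is actually faster than UltraID3 is an assumption I would have to profile:
// Sketch only, assuming the taglib-sharp package; MediaFile, item and onDevice are from my code above.
var tagFile = TagLib.File.Create(item);
lstMusicInfo.Add(new MediaFile
{
    Title = tagFile.Tag.Title,
    Album = tagFile.Tag.Album,
    Year = tagFile.Tag.Year.ToString(),
    Artist = tagFile.Tag.FirstPerformer,
    ComDirectory = Path.GetDirectoryName(item),
    FileFullName = item,
    OnDevice = onDevice,
    PTDevice = false
});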

One way to avoid loading everything at once, up front, would be to lazy-load the ID3 information as needed.
You'd construct your MediaFile instances thus...
new MediaFile(filePath)
...and MediaFile would look something like the following.
internal sealed class MediaFile
{
private readonly Lazy<UltraID3> _lazyFile;
public MediaFile(string filePath)
{
_lazyFile = new Lazy<UltraID3>(() =>
{
var file = new UltraID3();
file.Read(filePath);
return file;
});
}
public string Title
{
get { return _lazyFile.Value.Title; }
}
// ...
}
This is possibly less ideal than loading them as fast as you can in the background: if you do something like MediaFiles.OrderBy(x => x.Title).ToList() and nothing has been lazy-loaded yet, you'll have to wait for every file to load at that point.
Loading them in the background would make them all available immediately once the background loading has finished, but you might have to take care not to access some items until that loading is done.
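A minimal sketch of that background-loading variant (the method and list names are illustrative, not from the question):
// Illustrative sketch: force each Lazy<UltraID3> to load on a background thread
// so later property access is instant. mediaFiles is the list of lazy MediaFile instances built above.
private Task PreloadAsync(IReadOnlyList<MediaFile> mediaFiles)
{
    return Task.Run(() =>
    {
        foreach (var file in mediaFiles)
        {
            var _ = file.Title; // touching any tag property triggers the lazy load
        }
    });
}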

Your biggest bottleneck is new FileInfo(item), but you don't need FileInfo just to get the directory and file names. You can use Path.GetDirectoryName and Path.GetFileName, which are much faster since no I/O is involved.
UltraID3 mFile = new UltraID3();
//FileInfo fInfo;
mFile.Read(item);
//fInfo = new FileInfo(item);
bool onDevice = true;
if (DeviceMusic.Contains(item))
{
onDevice = false;
}
// My Problem starts here
lstMusicInfo.Add(new MediaFile()
{
Title = mFile.Title,
Album = mFile.Album,
Year = mFile.Year.ToString(),
ComDirectory = Path.GetDirectoryName(item), // fInfo.Directory.FullName,
FileFullName = item, // fInfo.FullName; item is already the full path
Artist = mFile.Artist,
OnDevice = onDevice,
PTDevice = false
});
//Ends here.

Related

How to have an AWS Lambda/Rekognition Function return an array of object keys

This feels like a simple question and I feel like I am overthinking it. I am doing an AWS project that compares face(s) in an image to a database (an S3 bucket) of other faces. So far I have a Lambda function for the CompareFaces request, a class library which invokes the function, and a UWP app that takes the image file as input and outputs a result. It has worked so far based on boolean (true or false) results, but now I want it to instead return which face(s) are recognized via an array. I am struggling to implement this.
Below is my Lambda function. I have adjusted the task to return an Array instead of a bool and changed the return value to be an array. At the bottom, I have created a global variable class with a testing array so I could attempt to reference the array elsewhere.
public class Function
{
//Function
public async Task<Array> FunctionHandler(string input, ILambdaContext context)
{
//number of matched faces
int matched = 0;
//Client setup
var rekognitionclient = new AmazonRekognitionClient();
var s3client = new AmazonS3Client();
//Create list of target images
ListObjectsRequest list = new ListObjectsRequest
{
BucketName = "bucket2"
};
ListObjectsResponse listre = await s3client.ListObjectsAsync(list);
//loop of list
foreach (Amazon.S3.Model.S3Object obj in listre.S3Objects)
{
//face request with input and obj.key images
var comparefacesrequest = new CompareFacesRequest
{
SourceImage = new Image
{
S3Object = new S3Objects
{
Bucket = "bucket1",
Name = input
}
},
TargetImage = new Image
{
S3Object = new S3Objects
{
Bucket = "bucket2",
Name = obj.Key
}
},
};
//compare with confidence of 95 (subject to change) to current target image
var detectresponse = await rekognitionclient.CompareFacesAsync(comparefacesrequest);
detectresponse.FaceMatches.ForEach(match =>
{
ComparedFace face = match.Face;
if (match.Similarity > 95)
{
//if face detected, raise matched
matched++;
for(int i = 0; i < Globaltest.testingarray.Length; i++)
{
if (Globaltest.testingarray[i] == "test")
{
Globaltest.testingarray[i] = obj.Key;
}
}
}
});
}
//Return the testing array (the same array is returned whether or not anything matched)
if (matched > 0)
{
return Globaltest.testingarray;
}
return Globaltest.testingarray;
}
}
public static class Globaltest
{
public static string[] testingarray = { "test", "test", "test" };
}
Next is my invoke request in my class library. It has so far been based on the Lambda outputting a boolean result, but I thought, "hey, it is parsing the result, it should be fine, right?" I do convert the result to a string, as there is no GetArray as far as I know.
public async Task<bool> IsFace(string filePath, string fileName)
{
await UploadS3(filePath, fileName);
AmazonLambdaClient client = new AmazonLambdaClient(accessKey, secretKey, Amazon.RegionEndpoint.USWest2);
InvokeRequest ir = new InvokeRequest();
ir.InvocationType = InvocationType.RequestResponse;
ir.FunctionName = "ImageTesting";
ir.Payload = "\"" + fileName + "\"";
var result = await client.InvokeAsync(ir);
var strResponse = Encoding.ASCII.GetString(result.Payload.ToArray());
if (bool.TryParse(strResponse, out bool result2))
{
return result2;
}
return false;
}
Finally, here is the section of my UWP app where I perform the function. I am referencing the Lambda client via "using Lambdaclienttest" (the name of the Lambda project; this is the only place I use the reference). When I run my project, I do still get a face detected when I should, but Globaltest.testingarray[0] is still equal to "test".
var Facedetector = new FaceDetector(Credentials.accesskey, Credentials.secretkey);
try
{
var result = await Facedetector.IsFace(filepath, filename);
if (result)
{
textBox1.Text = "There is a face detected";
textBox2.Text = Globaltest.testingarray[0];
}
else
{
textBox1.Text = "Try Again";
}
}
catch
{
textBox1.Text = "Please use a photo";
}
Does anyone have any suggestions?

Taglib Performance issue

I want to read bulk audio files' tags faster. Currently I am able to store 5,000 audio files' info into a struct list in about 7 seconds.
The issue is that when I select a folder with 20,000 or 40,000 files, the app keeps on running and never notifies me that the process is done, whereas when it reads 5,000 files it shows the message box prompting "Done loading files 5000" in 7 seconds.
Here is my code:
public struct SongInfoStruct
{
public string Key;
public string Value;
public string Artist;
public double Duration;
public string Comments;
public string Album;
public string Missing;
};
public async Task<SongInfoStruct> GetSongInfo(string url)
{
var songinfo = (dynamic)null;
var tagFile = TagLib.File.Create(url);
var songName = (dynamic)null;
var artist = (dynamic)null;
var album = (dynamic)null;
var comments = (dynamic)null;
var duration = (dynamic)null;
await Task.Run(() =>
{
songName = tagFile.Tag.Title;
artist = tagFile.Tag.FirstPerformer;
album = tagFile.Tag.Album;
comments = tagFile.Tag.Comment;
duration = tagFile.Properties.Duration.TotalSeconds;
});
return songinfo = new SongInfoStruct
{
Key = url,
Value = songName,
Artist = artist,
Duration = duration,
Comments = comments,
Album = album,
Missing = " "
};
}
public async Task<List<SongInfoStruct>> ReadPathFromSource(string Source)
{
var files = Directory.EnumerateFiles(Source, "*",
SearchOption.AllDirectories).Where(s => s.EndsWith(".mp3") ||
s.EndsWith(".m4a"));
int length = files.Count();
var listpaths = new List<SongInfoStruct>(length);
listpaths.Clear();
foreach (string PathsClickSong_temp in files)
{
var item = await GetSongInfo(PathsClickSong_temp);
await Task.Run(() =>
{
listpaths.Add(item);
});
}
MessageBox.Show("Done loading files "+ listpaths.Count.ToString());
return listpaths;
}
As a good practice, always set ConfigureAwait(false), unless you want to continue on the same caller context.
var item = await GetSongInfo(PathsClickSong_temp).ConfigureAwait(false);
Here you can use Task.WhenAll instead, which runs the reads concurrently.
Task.Run is good to use when there is CPU-intensive work; for details please have a look at When correctly use Task.Run and when just async-await.
var tasks = new List<Task<SongInfoStruct>>(length);
foreach (string PathsClickSong_temp in files)
{
    tasks.Add(GetSongInfo(PathsClickSong_temp));
}
// WhenAll awaits every task and returns their results as an array
SongInfoStruct[] listpaths = await Task.WhenAll(tasks);
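Note that as written, GetSongInfo still calls TagLib.File.Create synchronously on the calling thread before its Task.Run, so the WhenAll gains little. A minimal sketch (my suggestion, not from the original code) that moves the whole read off the caller:
// Sketch: do the TagLib read entirely inside Task.Run so the tasks above
// actually run concurrently with respect to the file I/O.
public Task<SongInfoStruct> GetSongInfo(string url)
{
    return Task.Run(() =>
    {
        var tagFile = TagLib.File.Create(url);
        return new SongInfoStruct
        {
            Key = url,
            Value = tagFile.Tag.Title,
            Artist = tagFile.Tag.FirstPerformer,
            Duration = tagFile.Properties.Duration.TotalSeconds,
            Comments = tagFile.Tag.Comment,
            Album = tagFile.Tag.Album,
            Missing = " "
        };
    });
}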

Encoder SDK 4 - Push to Publishing Point

I'm coding an application in C# using the EC4 SP2 SDK.
I want to publish my file to a media server publishing point. I've searched and found two examples regarding setting up and authenticating on publishing points, but they are either from older SDKs or do not work (and are for console apps). Basically my application doesn't encode anything, as if it had nothing to encode.
At a breakpoint in debug mode I can see the correct properties for the source file and for the server.
The encoding process takes 0 seconds. I checked the logs in the server events and I get a warning: "the security system has received an auth request that could not be decoded". I just don't have the knowledge to dig further than this. Any help would be appreciated.
this is the piece of code:
private void broadcastSourceFileToMediaServer2()
{
using (LiveJob job = new LiveJob())
{
String filetoencode = @"c:\temp\niceday.wmv";
LiveFileSource filesource = job.AddFileSource(filetoencode);
filesource.PlaybackMode = FileSourcePlaybackMode.Loop;
job.ActivateSource(filesource);
job.ApplyPreset(LivePresets.VC1Broadband4x3);
//don't know which one is good to use
job.AcquireCredentials += new EventHandler<AcquireCredentialsEventArgs>(job_AcquireCredentials);
_myUserName = "indes";
_pw = PullPW("indes");
Uri url = new Uri("http://192.168.1.74:8080/live");
PushBroadcastPublishFormat pubpoint = new PushBroadcastPublishFormat();
pubpoint.PublishingPoint = url;
pubpoint.UserName = _myUserName;
pubpoint.Password = _pw;
job.PublishFormats.Add(pubpoint);
job.PreConnectPublishingPoint();
job.StartEncoding();
statusBox.Text = job.NumberOfEncodedSamples.ToString();
job.StopEncoding();
job.Dispose();
}
}
public static string _myUserName { get; set; }
public static SecureString _pw { get; set; }
//codificação de Password a enviar
private static SecureString PullPW(string pw)
{
SecureString s = new SecureString();
foreach (char c in pw) s.AppendChar(c);
return s;
}
static void job_AcquireCredentials(object sender, AcquireCredentialsEventArgs e)
{
e.UserName = _myUserName;
e.Password = _pw;
e.Modes = AcquireCredentialModes.None;
}
Progress:
I managed to authenticate (at least get a positive audit event) on the server.
I changed from this:
//don't know which one is good to use
job.AcquireCredentials += new EventHandler<AcquireCredentialsEventArgs>(job_AcquireCredentials);
_myUserName = "indes";
_pw = PullPW("indes");
Uri url = new Uri("http://192.168.1.74:8080/live");
PushBroadcastPublishFormat pubpoint = new PushBroadcastPublishFormat();
pubpoint.PublishingPoint = url;
pubpoint.UserName = _myUserName;
pubpoint.Password = _pw;
To this:
job.AcquireCredentials += new EventHandler<AcquireCredentialsEventArgs>(job_AcquireCredentials);
_myUserName = @"mediaservername\user";
_pw = PullPW("user_password");
Uri url = new Uri("http://192.168.1.74:8080/live");
PushBroadcastPublishFormat pubpoint = new PushBroadcastPublishFormat();
pubpoint.PublishingPoint = url;
As you can see, I had to include the domain (either the domain or the computer name) before the username. This changed the failed audit events on the server, so I could eliminate the manual credentials (pubpoint.UserName and pubpoint.Password).
Now I'm just dealing with a lack-of-output-format exception. On to it.
How about using Smooth Streaming? I managed to get my project going, but I didn't get much further. Look below at the part that handles the Publish switch case, and ignore the file portion.
internal bool StartStream()
{
Busy = true;
// Instantiates a new job for encoding
//
//***************************************Live Stream Archive******************************
if (blnRecordFromFile)
{
// Sets up publishing format for file archival type
FileArchivePublishFormat fileOut = new FileArchivePublishFormat();
// job.ApplyPreset(LivePresets.VC1512kDSL16x9);
// Gets timestamp and edits it for filename
string timeStamp = DateTime.Now.ToString();
timeStamp = timeStamp.Replace("/", "-");
timeStamp = timeStamp.Replace(":", ".");
// Sets file path and name
string path = "C:\\output\\";
string filename = "Capture" + timeStamp + ".ismv";
if (!Directory.Exists(path))
Directory.CreateDirectory(path);
fileOut.OutputFileName = Path.Combine(path, filename);
// Adds the format to the job. You can add additional formats as well such as
// Publishing streams or broadcasting from a port
job.PublishFormats.Add(fileOut);
}
//******************************END OF Stream PORTION****************************************
////////////////////////////////////////////////////////////////////////////////////////////////////
//*************************************** Process Files or Live Stream******************************
if (blnRecordFromFile)
{
job.ApplyPreset(LivePresets.VC1IISSmoothStreaming720pWidescreen);
job = new LiveJob();
// Verifies all information is entered
if (string.IsNullOrWhiteSpace(sourcePath) || string.IsNullOrWhiteSpace(destinationPath))
return false;
job.Status += new EventHandler<EncodeStatusEventArgs>(StreamStatus);
LiveFileSource fileSource;
try
{
// Sets file to active source and checks if it is valid
fileSource = job.AddFileSource(sourcePath);
}
catch (InvalidMediaFileException)
{
return false;
}
// Sets to loop media for streaming
// fileSource.PlaybackMode = FileSourcePlaybackMode.Loop;
// Makes this file the active source. Multiple files can be added
// and cued to move to each other at their ends
job.ActivateSource(fileSource);
}
//******************************END OF FILE PORTION****************************************
// Sets up variable for fomat data
switch (publishType)
{
case Output.Archive:
// Verifies destination path exists and if not creates it
try
{
if (!Directory.Exists(destinationPath))
Directory.CreateDirectory(destinationPath);
}
catch (IOException)
{
return false;
}
FileArchivePublishFormat archiveFormat = new FileArchivePublishFormat();
// Gets the location of the old extention and removes it
string filename = Path.GetFileNameWithoutExtension(sourcePath);
// Sets the archive path and file name
archiveFormat.OutputFileName = Path.Combine(destinationPath, filename + ".ismv");
job.PublishFormats.Add(archiveFormat);
break;
case Output.Publish:
// Setups streaming of media to publishing point
job = new LiveJob();
// Aquires audio and video devices
Collection<EncoderDevice> devices = EncoderDevices.FindDevices(EncoderDeviceType.Video);
EncoderDevice video = devices.Count > 0 ? devices[0] : null;
for (int i = 1; i < devices.Count; ++i)
    devices[i].Dispose(); // dispose the devices we are not going to use
devices.Clear();
devices = EncoderDevices.FindDevices(EncoderDeviceType.Audio);
EncoderDevice audio = devices.Count > 0 ? devices[0] : null;
for (int i = 1; i < devices.Count; ++i)
devices[i].Dispose();
devices.Clear();
// Checks for a/v devices
if (video != null && audio != null)
{
//job.ApplyPreset(Preset.FromFile(@"C:\Tempura\LivePreset3.xml"));
job.ApplyPreset(LivePresets.H264IISSmoothStreamingLowBandwidthStandard);
job.OutputFormat.VideoProfile.SmoothStreaming = true;
deviceSource = job.AddDeviceSource(video, audio);
// Make this source the active one
job.ActivateSource(deviceSource);
}
else
{
error = true;
}
PushBroadcastPublishFormat publishFormat = new PushBroadcastPublishFormat();
try
{
// checks the path for a valid publishing point
publishFormat.PublishingPoint = new Uri(destinationPath);
}
catch (UriFormatException)
{
return false;
}
// Adds the publishing format to the job
try
{
// job.ApplyPreset(LivePresets.VC1IISSmoothStreaming480pWidescreen);
job.PublishFormats.Add(publishFormat);
job.PreConnectPublishingPoint();
}
catch (Exception e)
{
MessageBox.Show(e.StackTrace.ToString());
}
break;
default:
return false;
}
job.StartEncoding();
return true;
}
Sadly I don't have enough rep to comment, so I have to write it as an answer.
Since you are starting a live job, you should not call job.StopEncoding() right after StartEncoding if you want to stream. Usually you would use an event (or some other signal) to stop the encoding later. If you start encoding and immediately stop it, it is only logical that you have no output, or only a very small one.
I changed your code to the following and it seems to work well. I guess your problem is that you disposed the instance of the LiveJob class: you have to keep the instance alive until it has finished encoding the whole stream. So change the using block, and removing StopEncoding and Dispose will be OK.
private void broadcastSourceFileToMediaServer2()
{
LiveJob job = new LiveJob();
String filetoencode = @"c:\temp\niceday.wmv";
LiveFileSource filesource = job.AddFileSource(filetoencode);
filesource.PlaybackMode = FileSourcePlaybackMode.Loop;
job.ActivateSource(filesource);
job.ApplyPreset(LivePresets.VC1Broadband4x3);
//don't know which one is good to use
job.AcquireCredentials += new EventHandler<AcquireCredentialsEventArgs>(job_AcquireCredentials);
_myUserName = "indes";
_pw = PullPW("indes");
Uri url = new Uri("http://192.168.1.74:8080/live");
PushBroadcastPublishFormat pubpoint = new PushBroadcastPublishFormat();
pubpoint.PublishingPoint = url;
pubpoint.UserName = _myUserName;
pubpoint.Password = _pw;
job.PublishFormats.Add(pubpoint);
job.PreConnectPublishingPoint();
job.StartEncoding();
statusBox.Text = job.NumberOfEncodedSamples.ToString();
}
public static string _myUserName { get; set; }
public static SecureString _pw { get; set; }
//codificação de Password a enviar
private static SecureString PullPW(string pw)
{
SecureString s = new SecureString();
foreach (char c in pw) s.AppendChar(c);
return s;
}
static void job_AcquireCredentials(object sender, AcquireCredentialsEventArgs e)
{
e.UserName = _myUserName;
e.Password = _pw;
e.Modes = AcquireCredentialModes.None;
}
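A minimal sketch of the "stop it later" idea (the field and the Stop handler are illustrative, not from the original code):
// Illustrative: keep the LiveJob in a field and only stop/dispose it when the
// broadcast should actually end, e.g. from a Stop button.
private LiveJob job;

private void btnStopBroadcast_Click(object sender, EventArgs e)
{
    if (job != null)
    {
        job.StopEncoding();
        job.Dispose();
        job = null;
    }
}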

High memory usage problem for WPF application

I made a WPF application that opens a CSV file and does some operations, including web scraping, and gets some values of type long (0-10000000).
Now the issue is that when a large list of about 2,000 rows is opened, the software's memory usage rises above 700 MB, in some cases 1 GB.
I am shocked to see this.
Some things I think are going on:
Each entry of the CSV file has long values associated with it, and a single entry has approx. 10-12 columns, each of type long; when the row count is huge, memory shoots up.
There are certain places in the code with a loop (over all CSV rows) that creates an instance of a custom class. I thought of adding a destructor, then came to know that .NET manages memory automatically.
Here is the code for loading the CSV:
try
{
StreamReader sr = new StreamReader(path,Encoding.Default);
labelRankCheckStatus.Dispatcher.Invoke(DispatcherPriority.Normal, new Action(delegate()
{
labelRankCheckStatus.Content = "Loading Data";
}));
string strline = "";
string[] _values = null;
int x = 0;
while (!sr.EndOfStream)
{
x++;
strline = sr.ReadLine();
_values = strline.Split(',');
if (x == 1)
{
textBoxKw1.Text = _values[12];
textBoxKw2.Text = _values[14];
textBoxKw3.Text = _values[16];
textBoxKw4.Text = _values[18];
}
else if (x != 1)
{
if (_values[0] != "")
{
Url info = new Url();
srNo++;
info.URL = idn.GetAscii(_values[0].ToString().Trim());
info.IsChecked = true;
info.TestResults = int.Parse(_values[1].Replace("%","").TrimEnd().TrimStart());
info.PageRank= int.Parse(_values[2]);
info.RelPageRank = int.Parse(_values[3].Replace("%","").TrimEnd().TrimStart());
info.Alexa= long.Parse(_values[4]);
info.RelAlexa = long.Parse(_values[5].Replace("%","").TrimEnd().TrimStart());
info.Links= long.Parse(_values[6]);
info.RelLinks = long.Parse(_values[7].Replace("%","").TrimEnd().TrimStart());
info.GIW= long.Parse(_values[8]);
info.RelGIW = long.Parse(_values[9].Replace("%","").TrimEnd().TrimStart());
info.GIN= long.Parse(_values[10]);
info.RelGIN = long.Parse(_values[11].Replace("%","").TrimEnd().TrimStart());
info.Kw1Indexed= long.Parse(_values[12]);
info.RelKw1Indexed = long.Parse(_values[13].Replace("%","").TrimEnd().TrimStart());
info.Kw2Indexed= long.Parse(_values[14]);
info.RelKw2Indexed = long.Parse(_values[15].Replace("%","").TrimEnd().TrimStart());
info.Kw3Indexed= long.Parse(_values[16]);
info.RelKw3Indexed = long.Parse(_values[17].Replace("%","").TrimEnd().TrimStart());
info.Kw4Indexed= long.Parse(_values[18]);
info.RelKw4Indexed = long.Parse(_values[19].Replace("%","").TrimEnd().TrimStart());
info.DKwIndexed= long.Parse(_values[20]);
info.RelDKwIndexed = long.Parse(_values[21].Replace("%","").TrimEnd().TrimStart());
info.Info= _values[22];
info.srNo = srNo;
url.Add(info);
}
}
dataGrid1.Dispatcher.Invoke(DispatcherPriority.Normal, new Action(delegate()
{
dataGrid1.Columns[2].Header = "URL ( " + url.Count + " )";
try
{
if (dataGrid1.ItemsSource == null)
dataGrid1.ItemsSource = url;
else
dataGrid1.Items.Refresh();
}
catch (Exception)
{
}
labelRankCheckStatus.Dispatcher.Invoke(DispatcherPriority.Normal, new Action(delegate()
{
labelRankCheckStatus.Content = "Done";
}));
}));
}
sr.Close();
labelRankCheckStatus.Dispatcher.Invoke(DispatcherPriority.Normal, new Action(delegate()
{
labelRankCheckStatus.Content = "Complete ";
}));
}
catch (Exception c)
{
MessageBox.Show(c.Message);
}
Instead of building in-memory copies of your large objects, consider a more functional approach where you stream data in, process it, and output it to your database of choice. If you need to run operations on the old data, you can use an SQL database like SQLite.
Creating managed objects for every single entity in your system is beyond wasteful; you won't need most of them.
Of course, if you have a lot of RAM, it might simply be that the GC isn't yet bothering to collect your garbage because the memory isn't actively needed by anything. It's more likely that you're holding references to it, though.
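A minimal sketch of that streaming approach, assuming the System.Data.SQLite package and an already-created table (both assumptions; the column set is illustrative):
// Sketch: stream rows from the CSV straight into SQLite instead of keeping
// the whole list in memory. Assumes a table: CREATE TABLE Urls(Url TEXT, Alexa INTEGER).
using (var conn = new SQLiteConnection("Data Source=urls.db"))
using (var sr = new StreamReader(path, Encoding.Default))
{
    conn.Open();
    using (var cmd = new SQLiteCommand(
        "INSERT INTO Urls (Url, Alexa) VALUES (@url, @alexa)", conn))
    {
        string line;
        while ((line = sr.ReadLine()) != null)
        {
            var values = line.Split(',');
            cmd.Parameters.Clear();
            cmd.Parameters.AddWithValue("@url", values[0]);
            cmd.Parameters.AddWithValue("@alexa", long.Parse(values[4]));
            cmd.ExecuteNonQuery();
        }
    }
}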

C# webservice losing data on return

I am programming a client that calls a webmethod, but when I get the return data there are missing values in some of the fields and objects.
The webmethod in turn calls a WCF method, and in the WCF method the return data is fine. But when it is passed back through the webservice, data goes missing.
Is there any way to fix this problem?
This is my client code calling the webservice:
ReLocationDoc query = new ReLocationDoc();
query.PerformerSiteId = 1;
query.PerformerUserId = 1;
query.FromStatus = 10;
query.ToStatus = 200;
ReLocationDoc doc = new ReLocationDoc();
ServiceReference1.QPSoapClient service = new QPSoapClient();
try {
service.GetRelocationAssignment(query, out doc);
string test = doc.Assignment.Id.ToString();
} catch(Exception ex) {
MessageBox.Show(ex.Message);
}
The webmethod code is here:
[WebMethod]
public ReLocationResult GetRelocationAssignment(ReLocationDoc query, out ReLocationDoc reLocationDoc) {
    return m_reLocationClient.GetRelocationAssignment(query, out reLocationDoc);
}
And at last the WCF code:
public ReLocationResult GetRelocationAssignment(ReLocationDoc query, out ReLocationDoc reLocationDoc) {
try {
LOGGER.Trace("Enter GetRelocationAssignment().");
ReLocationResult result = reLocationCompactServiceClient.GetRelocationAssignment(out reLocationDoc, query);
if(reLocationDoc.Assignment == null || reLocationDoc.Assignment.CurrentStatus == STATUS_FINISHED) {
ReLocationDoc newQuery = new ReLocationDoc();
newQuery.Assignment = new AssignmentDoc();
newQuery.Assignment.EAN = DateTime.Today.ToString();
newQuery.PerformerSiteId = QPSITE;
newQuery.PerformerUserId = QPUSER;
reLocationDoc.AssignmentStatus = m_settings.ReadyStatus; ;
result = reLocationCompactServiceClient.CreateReLocationAssignment(out reLocationDoc, newQuery);
}
return result;
} finally {
LOGGER.Trace("Exit GetRelocationAssignment().");
}
}
The GetRelocationAssignment:
public ReLocationResult GetRelocationAssignment(ReLocationDoc query, out ReLocationDoc reLocationDoc) {
try {
LOGGER.Trace("Enter GetRelocationAssignment().");
ReLocationDoc doc = new ReLocationDoc();
ReLocationResult result = new ReLocationResult();
new Database(Connection).Execute(delegate(DBDataContext db) {
User user = GetVerifiedUser(db, query, MODULE_ID);
SiteModule siteModule = SiteModule.Get(db, query.PerformerSiteId, MODULE_ID);
Status status = Status.Get(db, query.FromStatus, query.ToStatus, 0);
Status startStatus = Status.Get(db, query.FromStatus, 0);
Status endStatus = Status.Get(db, query.ToStatus, 0);
IQueryable<Assignment> assignments = Assignment.GetAssignmentsWithEndStatus(db, siteModule, endStatus);
assignments = Assignment.FilterAssignmentStartStatus(assignments, startStatus);
foreach(Assignment assignment in assignments) {
LOGGER.Debug("Handling assignment: " + assignment.Id);
result.Status = true;
AssignmentDoc assignmentDoc = FillAssignmentDoc(assignment);
//ReLocationDoc doc = new ReLocationDoc();
AssignmentStatus sts = assignment.AssignmentStatus.OrderByDescending(ass => ass.Id).First();
assignmentDoc.CurrentStatus = sts.Status.Zone;
Status currentStatus = sts.Status;
IList<Item> items = assignment.Items.ToList();
IList<ItemDoc> itemDocs = new List<ItemDoc>();
foreach(Item item in items) {
ItemDoc itemDoc = FillItemDoc(item);
ItemDetail itemDetail;
if(ItemDetail.TryGet(db, item.Id, out itemDetail)) {
ItemDetailDoc itemDetailDoc = FillItemDetailDoc(itemDetail);
itemDoc.Details = new ItemDetailDoc[1];
Event eEvent = null;
if(Event.GetEvent(db, itemDetail, currentStatus, out eEvent)) {
EventDoc eventDoc = FillEventDoc(eEvent);
itemDetailDoc.Events = new EventDoc[1];
if(eEvent.LocationId.HasValue) {
Location location = null;
if(Location.TryGet(db, eEvent.LocationId.Value, out location)) {
eventDoc.Location = new LocationDoc();
eventDoc.Location = FillLocationDoc(location, db);
}
}
itemDetailDoc.Events[0] = eventDoc;
}
itemDoc.Details[0] = itemDetailDoc;
}
itemDocs.Add(itemDoc);
}
assignmentDoc.Items = itemDocs.ToArray();
doc.Assignment = assignmentDoc;
}
}, delegate(Exception e) {
result.Message = e.Message;
});
reLocationDoc = doc;
return result;
} finally {
LOGGER.Trace("Exit GetRelocationAssignment().");
}
}
In all this code the return data is fine. It is losing data only when passed back through the webmethod.
Also, the ordering of the XML tags in the message makes a difference. I had a similar problem maybe two years ago, and in that case parameter values were disappearing during transmission because the sending side ordered the tags differently from what was defined in the schema.
Make sure the XML tags are being accessed with the same casing at either end. If the casing is not the same, the value won't be read.
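A minimal sketch of pinning both the element order and the exact name on the contract type (using the question's ReLocationDoc fields for illustration):
// Sketch: XmlElementAttribute can fix both the serialized name (casing) and the
// element order, so both sides match the schema.
using System.Xml.Serialization;

public class ReLocationDoc
{
    [XmlElement(ElementName = "PerformerSiteId", Order = 1)]
    public int PerformerSiteId { get; set; }

    [XmlElement(ElementName = "PerformerUserId", Order = 2)]
    public int PerformerUserId { get; set; }
}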
You should check whether all the data is being sent back from your webservice: call the webservice manually and check its response.
If all the data is there, your webservice reference is probably outdated; update it by right-clicking the webservice reference and choosing "Update".
If the data doesn't come back, the problem is probably in the webservice code. Check your serialization code (if any) again, and make sure all returned types are [Serializable]. Also check that all returned types are public, as that's mandatory for serialization.
As noted by John Saunders, [Serializable] isn't used by XmlSerializer.
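For reference, a minimal sketch of what XmlSerializer actually requires of a returned type (no [Serializable] needed; the type and members are illustrative):
// XmlSerializer needs a public type, a public parameterless constructor,
// and public read/write members; non-public members are silently skipped.
public class AssignmentDoc
{
    public AssignmentDoc() { }       // public parameterless constructor required
    public string Id { get; set; }   // public getter and setter: serialized
    internal string Notes;           // non-public: not serialized
}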
