I am updating some XML via code in C#, and when I insert it into my database (a varchar column) I am seeing an extra ?. I think this has to do with the Encoding.UTF8 conversion I am doing; the ? is not visible anywhere in the string right up until SaveChanges() is called in EF Core.
The database column is varchar and cannot be changed due to the way it was set up.
I'm not sure how I should convert the XML back to a string after saving it from the dashboard (which is DevExpress):
internal static async Task<string> ConvertXmlToNewConnectionStrings(string xml, List<ConnectionSourceViewModel> connectionSourceViews, int currentUserId)
{
    var connectionStringOptions = await GetConnectionStringOptions(currentUserId).ConfigureAwait(false);
    System.Xml.Linq.XDocument doc;
    using (StringReader s = new StringReader(xml))
    {
        doc = System.Xml.Linq.XDocument.Load(s);
    }
    Dashboard d = new Dashboard();
    d.LoadFromXDocument(doc);
    var sqlDataSources = d.DataSources.OfType<DashboardSqlDataSource>().ToList();
    foreach (var item in sqlDataSources)
    {
        // We no longer want the connection properties (user name, password, etc.)
        // stored in the data source -- this matches the new DevExpress approach.
        item.ConnectionParameters = null;
        // Get the connection string the end user selected.
        var connectionStringId = connectionSourceViews.Where(x => x.SqlDataSourceComponentName == item.ComponentName).Select(x => x.SelectedConnectionStringId).FirstOrDefault();
        if (string.IsNullOrWhiteSpace(connectionStringId) == false)
        {
            item.ConnectionName = connectionStringOptions.Where(x => x.Value == connectionStringId).Select(x => x.Text).FirstOrDefault();
        }
    }
    MemoryStream ms = new MemoryStream();
    d.SaveToXml(ms);
    byte[] array = ms.ToArray();
    ms.Close();
    xml = Encoding.UTF8.GetString(array, 0, array.Length);
    return xml;
}
I'm expecting the ? not to be added to the XML in the varchar database column after saving it via EF Core.
Image of the ? at the start:
https://ibb.co/LZ2QVtr
I ended up removing the MemoryStream() code and went back to saving the document to an XDocument:
doc = d.SaveToXDocument();
Now everything works fine and no question mark (?) ends up in the database!
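For context, the extra ? is almost certainly the UTF-8 byte-order mark (EF BB BF) that SaveToXml writes at the start of the stream: Encoding.UTF8.GetString keeps those bytes as an invisible prefix in the string, and SQL Server renders it as ? once it lands in a varchar column. If you ever did need to keep the MemoryStream approach, stripping the preamble before decoding avoids this (a sketch; Utf8WithoutBom is a hypothetical helper, not part of any library):

```csharp
using System;
using System.Text;

// Hypothetical helper: strip a UTF-8 byte-order mark before decoding,
// so the stray bytes never reach the varchar column.
static string Utf8WithoutBom(byte[] array)
{
    byte[] preamble = Encoding.UTF8.GetPreamble(); // EF BB BF
    int offset = array.Length >= preamble.Length
        && array[0] == preamble[0]
        && array[1] == preamble[1]
        && array[2] == preamble[2]
        ? preamble.Length
        : 0;
    return Encoding.UTF8.GetString(array, offset, array.Length - offset);
}
```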
@MarkPflug I have a requirement to read 12 columns out of 45-85 total columns. This is from multiple CSV files (in the hundreds). But here is the problem: a lot of the time a column or two will be missing from some of the CSV data files. How do I check in C# for a missing column in a CSV file, given that I use the NuGet package Sylvan CSV reader? Here is some code:
// Create a reader.
CsvDataReader reader = CsvDataReader.Create(file, new CsvDataReaderOptions { ResultSetMode = ResultSetMode.MultiResult });

// Get columns by name from the CSV. This is where the error occurs, but only in the
// files that have missing columns. I store these ordinals and then use them in GetString(ordinal).
reader.GetOrdinal("HomeTeam");
reader.GetOrdinal("AwayTeam");
reader.GetOrdinal("Referee");
reader.GetOrdinal("FTHG");
reader.GetOrdinal("FTAG");
reader.GetOrdinal("Division");
// There is more data here, but you get the point.

// Here I run the reader and, for each row, call my database write method.
while (await reader.ReadAsync())
{
    await AddEntry(idCounter.ToString(), idCounter.ToString(), attendance, referee, division, date, home_team, away_team, fthg, ftag, hthg, htag, ftr, htr);
}
I tried the following:
// This still causes it to go out of bounds.
if (reader.GetOrdinal("Division") < reader.FieldCount)
    // only if the ordinal exists, assign it to a temp variable
else
    // skip this column (set the data in the AddEntry method to "")
Looking at the source, it appears that GetOrdinal throws if the column name isn't found or is ambiguous. As such I expect you could do:
int blah1Ord = -1;
try { blah1Ord = reader.GetOrdinal("blah1"); } catch { }

int blah2Ord = -1;
try { blah2Ord = reader.GetOrdinal("blah2"); } catch { }

while (await reader.ReadAsync())
{
    var x = new Whatever();
    if (blah1Ord > -1) x.Blah1 = reader.GetString(blah1Ord);
    if (blah2Ord > -1) x.Blah2 = reader.GetString(blah2Ord);
}
And so on: you effectively sound out whether a column exists (the ordinal remains -1 if it doesn't) and then use that to decide whether to read the column.
Incidentally, I've been dealing with CSVs with poor/misspelled/partial header names, and I've found myself getting the column schema and searching it for partial matches, like:
using var cdr = CsvDataReader.Create(sr);
var cs = await cdr.GetColumnSchemaAsync();
var sc = StringComparison.OrdinalIgnoreCase;
var blah1Ord = cs.FirstOrDefault(c => c.ColumnName.Contains("blah1", sc))?.ColumnOrdinal ?? -1;
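The try/catch probing in the snippets above can be tucked into a small extension method (a sketch; TryGetOrdinal is not part of Sylvan's API, and the broad catch simply mirrors the "GetOrdinal throws when the column is missing" behaviour described above):

```csharp
using System.Data.Common;

static class DataReaderExtensions
{
    // Returns the column ordinal, or -1 when the reader has no such column.
    public static int TryGetOrdinal(this DbDataReader reader, string name)
    {
        try { return reader.GetOrdinal(name); }
        catch { return -1; } // GetOrdinal throws for unknown column names
    }
}
```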
I started using the Sylvan library and it is really powerful.
Not sure if this could help you, but if you use the DataBinder.Create<T> generic method with an entity, you can do the following to get the columns in your CSV file that do not map to any of the entity properties:
var dataBinderOptions = new DataBinderOptions()
{
// AllColumns is required to throw UnboundMemberException
BindingMode = DataBindingMode.AllColumns,
};
IDataBinder<TEntity> binder;
try
{
binder = DataBinder.Create<TEntity>(dataReader, dataBinderOptions);
}
catch (UnboundMemberException ex)
{
// Use ex.UnboundColumns to get the unmapped columns
readResult.ValidationProblems.Add($"Unmapped columns: {String.Join(", ", ex.UnboundColumns)}");
return;
}
The database table stores the document in a varbinary column, so I can get it as a byte[] in C# code.
Now, how can I export this byte[] to a JSON file field?
if (item.IS_VIDEO == 0)
{
var content = ctx.DOCUMENT_TABLE.First(a => a.document_id == item.document_id).DOCUMENT_CONTENT;
if (content != null)
{
publicationClass.document_content = System.Text.Encoding.Default.GetString(content); //for export to json field
}
}
Is this a valid way to export a byte[] to JSON?
Have you considered letting the JSON serializer deal with the problem?
byte[] file = File.ReadAllBytes("FilePath"); // replace with how you get your array of bytes
string str = JsonConvert.SerializeObject(file);
This can be then deserialized on the receiving end like this:
var xyz = JsonConvert.DeserializeObject<byte[]>(str);
This appears to work without any issues; however, there might be some size limitations worth investigating before committing to this method.
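For what it's worth, Json.NET represents a byte[] as a Base64 string, so the round trip above is equivalent to doing the encoding by hand with only the BCL (a sketch; the sample bytes stand in for your varbinary content):

```csharp
using System;

byte[] file = { 1, 2, 3, 255 }; // stand-in for the varbinary content

// Base64 is what JsonConvert emits for byte[]; doing it explicitly avoids
// corrupting binary data the way Encoding.Default.GetString would.
string str = Convert.ToBase64String(file);

// On the receiving end, decode back to the original bytes.
byte[] roundTripped = Convert.FromBase64String(str);
```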
I am saving a photo in SQL (varbinary(MAX) data type) with this code:
if (SPV1 == true)
{
imgVC1 = Image.FromFile(Open1.FileName);
imgFormat1 = picVisit1.BackgroundImage.RawFormat;
Ms1 = new MemoryStream();
imgVC1.Save(Ms1, imgFormat1);
byte[] ArrayV1 = Ms1.GetBuffer();
csCompanies.VisitCard1 = ArrayV1;
}
else
csCompanies.VisitCard1 = null;
and it continues with this code in the class:
if (VisitCard1 == null)
    com.Parameters.AddWithValue("@VisitCard1", Convert.ToByte(VisitCard1));
else
    com.Parameters.AddWithValue("@VisitCard1", VisitCard1);
I used the if/else to save a null value when the user has not changed the default photo.
The null data is saved as "0x00" in SQL.
When I want to show the data, I need to know whether the data in SQL is null or not: if it is null, do one thing, and if not, do another.
But I can't compare the SQL data with a null value, and when I use
if (csCompanies.Logo1 == null)
the result is always false [meaning it is not null, even when it was saved as "null" (0x00)].
Don't confuse C# null with SQL Server NULL. They are different things.
Instead of saving C# null in the database, save DBNull.Value:
if (VisitCard1 == null)
    com.Parameters.Add("@VisitCard1", SqlDbType.VarBinary, -1).Value = DBNull.Value;
else
    com.Parameters.Add("@VisitCard1", SqlDbType.VarBinary, -1).Value = VisitCard1;
For the visit image I used this plan:
In the class:
if (dt.Rows[0]["VisitCard1"] != DBNull.Value)
    VisitCard1 = (byte[])dt.Rows[0]["VisitCard1"];
else
    VisitCard1 = null;
and in the form:
if (csCompanies.Catalog5 != null)
{
byte[] Array = csCompanies.Catalog5;
MS = new MemoryStream(Array);
picCata5.BackgroundImage = Image.FromStream(MS);
}
I can query a single document from the Azure DocumentDB like this:
var response = await client.ReadDocumentAsync( documentUri );
If the document does not exist, this will throw a DocumentClientException. In my program I have a situation where the document may or may not exist. Is there any way to query for the document without using try-catch and without doing two round trips to the server, first to query for the document and second to retrieve the document should it exist?
Sadly there is no other way: either you handle the exception or you make two calls. If you pick the second path, here is one performance-driven way of checking for document existence:
public bool ExistsDocument(string id)
{
var client = new DocumentClient(DatabaseUri, DatabaseKey);
var collectionUri = UriFactory.CreateDocumentCollectionUri("dbName", "collectioName");
var query = client.CreateDocumentQuery<Microsoft.Azure.Documents.Document>(collectionUri, new FeedOptions() { MaxItemCount = 1 });
return query.Where(x => x.Id == id).Select(x=>x.Id).AsEnumerable().Any(); //using Linq
}
The client should be shared among all your DB-accessing methods, but I created it here to have a self-sufficient example.
The new FeedOptions() { MaxItemCount = 1 } will make sure the query is optimized for one result (we don't really need more).
The Select(x => x.Id) makes sure no other data is returned; if you don't specify it and the document exists, the query will return all of its info.
You're specifically querying for a given document, and ReadDocumentAsync will throw that DocumentClientException when it can't find the specific document (returning a 404 in the status code). This is documented here. By catching the exception (and seeing that it's a 404), you wouldn't need two round trips.
To get around dealing with this exception, you'd need to make a query instead of a discrete read, by using CreateDocumentQuery(). Then, you'll simply get a result set you can enumerate through (even if that result set is empty). For example:
var collLink = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);
var querySpec = new SqlQuerySpec { QueryText = "<querytext>" };
var itr = client.CreateDocumentQuery(collLink, querySpec).AsDocumentQuery();
var response = await itr.ExecuteNextAsync<Document>();
foreach (var doc in response.AsEnumerable())
{
// ...
}
With this approach, you'll just get no responses. In your specific case, where you'll be adding a WHERE clause to query a specific document by its id, you'll either get zero results or one result.
With the CosmosDB SDK v3 it's possible. You can check whether an item exists in a container, and get it, by using Container.ReadItemStreamAsync(string id, PartitionKey key) and checking response.StatusCode:
using var response = await container.ReadItemStreamAsync(id, new PartitionKey(key));
if (response.StatusCode == HttpStatusCode.NotFound)
{
return null;
}
if (!response.IsSuccessStatusCode)
{
throw new Exception(response.ErrorMessage);
}
using var streamReader = new StreamReader(response.Content);
var content = await streamReader.ReadToEndAsync();
var item = JsonConvert.DeserializeObject(content, stateType);
This approach has a drawback, however. You need to deserialize the item by hand.
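If the hand-deserialization is a deal breaker, ReadItemAsync<T> does it for you, at the cost of an exception on the not-found path (a sketch against the v3 SDK; MyItem and FindItemAsync are hypothetical names):

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class MyItem { public string id { get; set; } } // hypothetical document type

public static async Task<MyItem> FindItemAsync(Container container, string id, string partitionKey)
{
    try
    {
        // ReadItemAsync<T> deserializes the document for you...
        ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(id, new PartitionKey(partitionKey));
        return response.Resource;
    }
    catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
    {
        // ...but a missing item surfaces as an exception rather than a status code.
        return null;
    }
}
```

This is a point read, so it costs the same RUs as the stream variant; the trade-off is purely exception handling versus manual deserialization.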
So I've been reading that I shouldn't write my own CSV reader/writer, so I've been trying to use the CsvHelper library installed via NuGet. The CSV file is a grey-scale image, with the number of rows being the image height and the number of columns the width. I would like to read the values row-wise into a single List<string> or List<byte>.
The code I have so far is:
using CsvHelper;
public static List<string> ReadInCSV(string absolutePath)
{
    IEnumerable<string> allValues;
    using (TextReader fileReader = File.OpenText(absolutePath))
    {
        var csv = new CsvReader(fileReader);
        csv.Configuration.HasHeaderRecord = false;
        allValues = csv.GetRecords<string>();
    }
    return allValues.ToList<string>();
}
But allValues.ToList<string>() is throwing a:
CsvConfigurationException was unhandled by user code
An exception of type 'CsvHelper.Configuration.CsvConfigurationException' occurred in CsvHelper.dll but was not handled in user code
Additional information: Types that inherit IEnumerable cannot be auto mapped. Did you accidentally call GetRecord or WriteRecord which acts on a single record instead of calling GetRecords or WriteRecords which acts on a list of records?
GetRecords is probably expecting my own custom class, but I'm just wanting the values as some primitive type or string. Also, I suspect the entire row is being converted to a single string, instead of each value being a separate string.
According to @Marc L's post, you can try this:
public static List<string> ReadInCSV(string absolutePath) {
List<string> result = new List<string>();
string value;
using (TextReader fileReader = File.OpenText(absolutePath)) {
var csv = new CsvReader(fileReader);
csv.Configuration.HasHeaderRecord = false;
while (csv.Read()) {
for(int i=0; csv.TryGetField<string>(i, out value); i++) {
result.Add(value);
}
}
}
return result;
}
If all you need is the string values for each row in an array, you could use the parser directly.
var parser = new CsvParser( textReader );
while( true )
{
string[] row = parser.Read();
if( row == null )
{
break;
}
}
http://joshclose.github.io/CsvHelper/#reading-parsing
Update
Version 3 has support for reading and writing IEnumerable properties.
The whole point here is to read all lines of the CSV and deserialize them into a collection of objects. I'm not sure why you want to read it as a collection of strings; the generic GetRecords<T>() would probably work best for you in that case, as stated before. This library shines when you use it for that purpose:
using System.Linq;
...
using (var reader = new StreamReader(path))
using (var csv = new CsvReader(reader))
{
var yourList = csv.GetRecords<YourClass>().ToList();
}
If you don't use ToList(), it will yield one record at a time (for better performance); please read https://joshclose.github.io/CsvHelper/examples/reading/enumerate-class-records
Please try this. It worked for me.
TextReader reader = File.OpenText(filePath);
CsvReader csvFile = new CsvReader(reader);
csvFile.Configuration.HasHeaderRecord = true;
csvFile.Read();
var records = csvFile.GetRecords<Server>().ToList();
Server is an entity class. This is how I created it:
public class Server
{
private string details_Table0_ProductName;
public string Details_Table0_ProductName
{
get
{
return details_Table0_ProductName;
}
set
{
this.details_Table0_ProductName = value;
}
}
private string details_Table0_Version;
public string Details_Table0_Version
{
get
{
return details_Table0_Version;
}
set
{
this.details_Table0_Version = value;
}
}
}
You are close. It isn't that it's trying to convert the row to a string. CsvHelper tries to map each field in the row to the properties of the type you give it, using names from a header row. Further, it doesn't understand how to do this with IEnumerable types (which string implements), so it just throws when its auto-mapping gets to that point in testing the type.
That is a whole lot of complication for what you're doing. If your file format is sufficiently simple, which yours appears to be (a well-known field layout with neither escaped nor quoted delimiters), I see no reason to take on the overhead of importing a library. You should be able to enumerate the values as needed with System.IO.File.ReadLines() and String.Split().
//pseudo-code...you don't need CsvHelper for this
IEnumerable<string> GetFields(string filepath)
{
foreach(string row in File.ReadLines(filepath))
{
foreach(string field in row.Split(',')) yield return field;
}
}
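As a quick sanity check of the sketch above (GetFields is re-declared here so the snippet stands alone), a 2x3 grey-scale file enumerates its six values row-wise:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

static IEnumerable<string> GetFields(string filepath)
{
    foreach (string row in File.ReadLines(filepath))
    {
        foreach (string field in row.Split(',')) yield return field;
    }
}

string path = Path.GetTempFileName();
File.WriteAllLines(path, new[] { "0,128,255", "64,32,16" });
var values = GetFields(path).ToList(); // ["0", "128", "255", "64", "32", "16"]
File.Delete(path);
```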
static void WriteCsvFile(string filename, IEnumerable<Person> people)
{
    // Dispose both writers so buffered records are flushed to disk.
    using (StreamWriter textWriter = File.CreateText(filename))
    using (var csvWriter = new CsvWriter(textWriter, System.Globalization.CultureInfo.CurrentCulture))
    {
        csvWriter.WriteRecords(people);
    }
}