Revit API SizeTableManager.RemoveSizeTable() not working? - c#

I'm trying to remove all Lookup Tables that start with a specific prefix inside all families.
The method "sizeTableManager.RemoveSizeTable(tableToRemove)" returns true as if it succeeded but when I go edit the families in the project and bring up the Lookup Tables list they are still there.
The transaction seems to be committing with no errors too, which is even more puzzling...
Any ideas as to what I'm doing wrong?
This is my code so far:
string existingPrefix = "ExistingPrefix_";
Document doc = this.ActiveUIDocument.Document;
FilteredElementCollector collector = new FilteredElementCollector(doc);
ICollection<Element> elements = collector.OfClass(typeof(Family)).ToElements();
foreach (var element in elements)
{
    using (Transaction t = new Transaction(doc, "flush old lookup tables"))
    {
        t.Start();
        FamilySizeTableManager sizeTableManager = FamilySizeTableManager.GetFamilySizeTableManager(doc, element.Id);
        if (sizeTableManager != null)
        {
            foreach (var tableToRemove in sizeTableManager.GetAllSizeTableNames())
            {
                if (tableToRemove.StartsWith(existingPrefix))
                {
                    bool result = sizeTableManager.RemoveSizeTable(tableToRemove);
                    if (result)
                    {
                        // TaskDialog.Show("Success", "Removed " + tableToRemove + " from " + element.Name);
                        var test = "test";
                    }
                    else
                    {
                        TaskDialog.Show("Warning", "Unable to remove " + tableToRemove + " from " + element.Name);
                    }
                }
            }
        }
        var commitResult = t.Commit();
    }
}
Thanks in advance!

Well, it turns out that the RemoveSizeTable() method was indeed working, but I had to regenerate the document for the changes to take effect:
using (Transaction t = new Transaction(doc, "regenerate document"))
{
    t.Start();
    doc.Regenerate();
    t.Commit();
}
Discovered this after noticing that saving the document caused the changes to take effect.
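For completeness, a minimal sketch of the loop with the regeneration folded in (untested; same doc and prefix as above), so a separate regeneration transaction is not needed:

string existingPrefix = "ExistingPrefix_";
foreach (Element element in new FilteredElementCollector(doc).OfClass(typeof(Family)))
{
    using (Transaction t = new Transaction(doc, "flush old lookup tables"))
    {
        t.Start();
        FamilySizeTableManager manager = FamilySizeTableManager.GetFamilySizeTableManager(doc, element.Id);
        if (manager != null)
        {
            foreach (string tableName in manager.GetAllSizeTableNames())
            {
                if (tableName.StartsWith(existingPrefix))
                {
                    manager.RemoveSizeTable(tableName);
                }
            }
        }
        doc.Regenerate(); // flush the removals so the family editor reflects them
        t.Commit();
    }
}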

Related

Where are my memory leaks and how do I fix them? Why does memory consumption keep increasing?

I have been struggling for a few days with growing memory consumption in a .NET Core 2.2 console application, and I have run out of ideas for what else I could improve.
In my application I have a method that triggers the StartUpdatingAsync method:
public MenuViewModel()
{
    if (File.Exists(_logFile))
        File.Delete(_logFile);
    try
    {
        StartUpdatingAsync("basic").GetAwaiter().GetResult();
    }
    catch (ArgumentException aex)
    {
        Console.WriteLine($"Caught ArgumentException: {aex.Message}");
    }
    Console.ReadKey();
}
StartUpdatingAsync creates a repository instance, which fetches from the DB a list of objects to be updated (around 200k):
private async Task StartUpdatingAsync(string dataType)
{
    _repo = new DataRepository();
    List<SomeModel> some_list = new List<SomeModel>();
    some_list = _repo.GetAllToBeUpdated();
    await IterateStepsAsync(some_list, _step, dataType);
}
Within IterateStepsAsync we fetch the updates, parse them against existing data, and update the DB. Inside each while iteration I created new instances of all the classes and lists, to make sure the old ones released their memory, but it didn't help. I also called GC.Collect() at the end of the method, which is not helping either. Please note that the method below spawns lots of parallel Tasks, but they are supposed to be disposed of within it, am I right?
private async Task IterateStepsAsync(List<SomeModel> some_list, int step, string dataType)
{
    List<Area> areas = _repo.GetAreas();
    int counter = 0;
    while (counter < some_list.Count)
    {
        _repo = new DataRepository();
        _updates = new HttpUpdates();
        List<Task> tasks = new List<Task>();
        List<VesselModel> vessels = new List<VesselModel>();
        SemaphoreSlim throttler = new SemaphoreSlim(_degreeOfParallelism);
        for (int i = counter; i < step; i++)
        {
            int iteration = i;
            bool skip = false;
            if (dataType == "basic" && (some_list[iteration].Mmsi == 0 || !some_list[iteration].Speed.HasValue)) // if it could not be parsed with "full"
                skip = true;
            tasks.Add(Task.Run(async () =>
            {
                string scraped = "";
                await throttler.WaitAsync();
                try
                {
                    if (!skip)
                    {
                        Model model = await _updates.ScrapeSingleModelAsync(some_list[iteration].Mmsi);
                        while (Updating)
                        {
                            await Task.Delay(1000);
                        }
                        if (model != null)
                        {
                            lock (((ICollection)vessels).SyncRoot)
                            {
                                vessels.Add(model);
                                scraped = BuildData(model);
                            }
                        }
                    }
                    else
                    {
                        // do nothing
                    }
                }
                catch (Exception ex)
                {
                    Log("Scrape error: " + ex.Message);
                }
                finally
                {
                    while (Updating)
                    {
                        await Task.Delay(1000);
                    }
                    Console.WriteLine("Updates for " + counter++ + " of " + some_list.Count + scraped);
                    throttler.Release();
                }
            }));
        }
        try
        {
            await Task.WhenAll(tasks);
        }
        catch (Exception ex)
        {
            Log("Critical error: " + ex.Message);
        }
        finally
        {
            _repo.UpdateModels(vessels, dataType, counter, some_list.Count, _step);
            step = step + _step;
            GC.Collect();
        }
    }
}
Inside the method above, we call _repo.UpdateModels, where the DB is updated. I tried two approaches, using EF Core and SqlConnection, both with similar results. You can find both below.
EF Core
internal List<VesselModel> UpdateModels(List<Model> vessels, string dataType, int counter, int total, int _step)
{
    for (int i = 0; i < vessels.Count; i++)
    {
        Console.WriteLine("Parsing " + i + " of " + vessels.Count);
        Model existing = _context.Vessels.Where(v => v.id == vessels[i].Id).FirstOrDefault();
        if (vessels[i].LatestActivity.HasValue)
        {
            existing.LatestActivity = vessels[i].LatestActivity;
        }
        // and similar parsing several times, as above
    }
    Console.WriteLine("Saving ...");
    _context.SaveChanges();
    return new List<Model>(_step);
}
SqlConnection
internal List<VesselModel> UpdateModels(List<Model> vessels, string dataType, int counter, int total, int _step)
{
    if (vessels.Count > 0)
    {
        using (SqlConnection connection = GetConnection(_connectionString))
        using (SqlCommand command = connection.CreateCommand())
        {
            connection.Open();
            StringBuilder querySb = new StringBuilder();
            for (int i = 0; i < vessels.Count; i++)
            {
                Console.WriteLine("Updating " + i + " of " + vessels.Count);
                // PARSE
                VesselAisUpdateModel existing = new VesselAisUpdateModel();
                if (vessels[i].Id > 0)
                {
                    // find existing
                }
                if (existing != null)
                {
                    // update for basic data
                    querySb.Append("UPDATE dbo." + _vesselsTableName + " SET Id = '" + vessels[i].Id + "'");
                    if (existing.Mmsi == 0)
                    {
                        if (vessels[i].MMSI.HasValue)
                        {
                            querySb.Append(" , MMSI = '" + vessels[i].MMSI + "'");
                        }
                    }
                    // and similar parsing several times, as above
                    querySb.Append(" WHERE Id = " + existing.Id + "; ");
                    querySb.AppendLine();
                }
            }
            try
            {
                Console.WriteLine("Sending SQL query to " + counter);
                command.CommandTimeout = 3000;
                command.CommandType = CommandType.Text;
                command.CommandText = querySb.ToString();
                command.ExecuteNonQuery();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                connection.Close();
            }
        }
    }
    return new List<Model>(_step);
}
The main problem is that after tens or hundreds of thousands of updated models, my console application's memory consumption increases continuously, and I have no idea why.
SOLUTION: my problem was inside the ScrapeSingleModelAsync method, where I was using HtmlAgilityPack incorrectly, which I was able to debug thanks to cassandrad.
Your code is messy, with a huge number of different objects with unknown lifetimes. It's hardly possible to figure out the problem just by looking at it.
Consider using profiling tools, for example Visual Studio's Diagnostic Tools; they will help you find which objects are living too long on the heap. Here is an overview of its functions related to memory profiling; highly recommended reading.
In short, you need to take two snapshots and look at which objects are taking the most memory. Let's look at a simple example.
int[] first = new int[10000];
Console.WriteLine(first.Length);
int[] second = new int[9999];
Console.WriteLine(second.Length);
Console.ReadKey();
Take the first snapshot after your function has run at least once. In my case, I took the snapshot when the first huge array had been allocated.
After that, let your app run for some time so the difference in memory usage becomes noticeable, then take the second memory snapshot.
You'll notice that another snapshot is added with info about the size of the difference. To get more specific info, click one of the blue labels of the latest snapshot to open a snapshot comparison.
Following my example, we can see that there is a change in the count of int arrays. By default int[] wasn't visible in the table, so I had to uncheck Just My Code in the filter options.
So, this is what needs to be done: after you figure out which objects increase in count or size over time, you can locate where these objects are created and optimize that operation.
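As a rough in-code complement to the profiler, you can log the managed heap size between iterations with GC.GetTotalMemory; a number that keeps climbing even after forced full collections hints that something is holding references. A minimal sketch:

// Sketch: log heap size after a forced full collection at the end of each iteration.
// A monotonically growing value suggests objects are being kept alive somewhere
// (e.g. by a static field, an event handler, or a growing list).
long bytes = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine("Managed heap: " + (bytes / (1024 * 1024)) + " MB");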

Entity Framework - How To Handle Batch SaveChanges Failure

In my C# program I am using Entity Framework to synchronize a local SQL Server database with QuickBooks data. Getting the data from QuickBooks does not seem to have any issues. However I am running into a stumbling block when doing batch commits of entities.
Currently I am building up the DataContext with a configurable number of entities and then committing the entities in batch. So far the batch has not failed, but what if it does? My idea to combat this would be to iterate over the batch and submit each entity one at a time and then log the one(s) that is/are causing the commit failure.
However I do not see a way to do this with the data context, since it appears to be an all-or-nothing matter when using SaveChanges(). Is there a way to handle what I am trying to accomplish, or should I be going about dealing with the failures in a completely different way?
Here is the code that I currently have, in case you want to take a look at it:
int itemsCount = 0;
int itemsSynced = 0;
int itemsFailed = 0;
ArrayList exceptions = new ArrayList();
int batchSliceCount = Properties.Settings.Default.SyncBatchSize; // Getting the max batch size from the settings
int index = 1; // Index used for keeping track of current batch size on data context
List<Customer> currentBatch = new List<Customer>(); // List to hold current batch
db = new DataContext(DatabaseHelper.GetLocalDatabaseConnectionString());
foreach (var customer in QBResponse.customers)
{
    itemsCount++;
    try
    {
        string debugMsg = "Saving Customer with the Following Details....." + Environment.NewLine;
        debugMsg += "ListId: " + customer.CustomerListId + Environment.NewLine;
        debugMsg += "FullName: " + customer.FullName + Environment.NewLine;
        int progressPercentage = (itemsCount * 100) / opResponse.retCount;
        UpdateStatus(Enums.LogLevel.Debug, debugMsg, progressPercentage);
        var dbCustomer = db.Customers.FirstOrDefault(x => x.CustomerListId == customer.CustomerListId);
        if (dbCustomer == null)
        {
            // customer.CopyPropertiesFrom(customer, db);
            Customer newCustomer = new Customer();
            newCustomer.CopyCustomer(customer, db);
            newCustomer.AddBy = Enums.OperationUser.SyncOps;
            newCustomer.AddDateTime = DateTime.Now;
            newCustomer.EditedBy = Enums.OperationUser.SyncOps;
            newCustomer.EditedDateTime = DateTime.Now;
            newCustomer.SyncStatus = true;
            db.Customers.Add(newCustomer);
            currentBatch.Add(newCustomer);
        }
        else
        {
            // dbCustomer.CopyPropertiesFrom(customer, db);
            dbCustomer.CopyCustomer(customer, db);
            dbCustomer.EditedBy = Enums.OperationUser.SyncOps;
            dbCustomer.EditedDateTime = DateTime.Now;
            dbCustomer.SyncStatus = true;
            currentBatch.Add(dbCustomer);
        }
        try
        {
            if (index % batchSliceCount == 0 || index == opResponse.customers.Count()) // Time to submit the batch
            {
                UpdateStatus(Enums.LogLevel.Information, "Saving Batch of " + batchSliceCount + " Customers to Local Database");
                db.SaveChanges();
                itemsSynced += currentBatch.Count();
                currentBatch = new List<Customer>();
                db.Dispose();
                db = new DataContext(DatabaseHelper.GetLocalDatabaseConnectionString());
            }
        }
        catch (Exception ex)
        {
            string errorMsg = "Error occurred submitting batch. Iterating and submitting one at a time. " + Environment.NewLine;
            errorMsg += "Error Was: " + ex.GetBaseException().Message + Environment.NewLine + "Stack Trace: " + ex.GetBaseException().StackTrace;
            UpdateStatus(Enums.LogLevel.Debug, errorMsg, progressPercentage);
            // What to do here? Is there a way to properly iterate over the context and submit a change one at a time?
        }
    }
    catch (Exception ex)
    {
        // Log exception and restart the data context
        db.Dispose();
        db = new DataContext(DatabaseHelper.GetLocalDatabaseConnectionString());
    }
    Thread.Sleep(Properties.Settings.Default.SynchronizationSleepTimer);
    index++;
}
It depends on the exceptions you want to recover from...
If you are just looking for a way to retry when the connection was interrupted, you could use a custom execution strategy based on DbExecutionStrategy that retries when specific errors occur, as demonstrated in this CodeProject article.
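A minimal sketch of such a strategy in EF6 (the error numbers below are illustrative, not a complete list of transient SQL Server errors):

using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.SqlClient;

// Retries up to 4 times, with exponential backoff capped at 10 seconds,
// on a couple of errors commonly treated as transient.
public class RetryOnTransientErrorStrategy : DbExecutionStrategy
{
    public RetryOnTransientErrorStrategy()
        : base(maxRetryCount: 4, maxDelay: TimeSpan.FromSeconds(10)) { }

    protected override bool ShouldRetryOn(Exception exception)
    {
        var sqlEx = exception as SqlException;
        // -2 = client timeout, 1205 = deadlock victim (illustrative subset)
        return sqlEx != null && (sqlEx.Number == -2 || sqlEx.Number == 1205);
    }
}

// EF picks this configuration up automatically when it lives in the
// same assembly as the DbContext.
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient", () => new RetryOnTransientErrorStrategy());
    }
}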

C# threading parallel issue

Help is very welcome and extremely appreciated, thank you. This program is a ProxyChecker: I've bought a bunch of proxies with different users/passes (and will continue to do so), but some have expired. I added a breakpoint, and what the code is actually doing is skipping the ProxyClient block and going straight to the foreach (var item in l); each item never accepts a connection, so it just returns false and finishes.
private static void CheckProxy(object state)
{
    var u = user[0];
    var p = pass[0];
    var l = new List<MyIP>();
    Parallel.ForEach(l.ToArray(), (ip_item) =>
    {
        try
        {
            string ip = ip_item.IP;
            using (var client = new ProxyClient(ip, u, p))
            {
                Console.WriteLine(ip, user, pass);
                client.Connect();
                ip_item.AcceptsConnection = client.IsConnected;
            }
        }
        catch
        {
            l.Remove(ip_item);
        }
    });
    foreach (var item in l)
    {
        if (item.AcceptsConnection == true)
        {
            WriteToFile(user[0], pass[0]);
        }
        Console.WriteLine(item.IP + " is " + (item.AcceptsConnection) + " accepts connections" + " doesn not accept connections");
    }
}
Load IPs function:
private static void loadips()
{
    using (TextReader tr = new StreamReader("ips.txt"))
    {
        var l = new List<MyIP>();
        string line = null;
        while ((line = tr.ReadLine()) != null)
        {
            l.Add(new MyIP { IP = line });
        }
    }
}
I have added this in response to the answer. I believe this is a variable-scope issue, as the variable is declared locally rather than at class level. Any ideas how to fix it? I'm unable to find a way to get this working; it seems like I'm being dumb. Thanks.
The problem is in these two lines:
var l = new List<MyIP>();
Parallel.ForEach(l.ToArray(), (ip_item) =>
You just created l as a new List with no items in it. Calling ToArray() will give you an empty array. When Parallel.ForEach sees an empty array, it just gets skipped since there's nothing to iterate over.
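A minimal sketch of one way to wire this up (assuming the MyIP, ProxyClient, user, pass, and WriteToFile members from your question): have the load method return the list and use it in CheckProxy instead of creating a fresh empty one:

private static List<MyIP> LoadIps()
{
    var ips = new List<MyIP>();
    using (TextReader tr = new StreamReader("ips.txt"))
    {
        string line;
        while ((line = tr.ReadLine()) != null)
        {
            ips.Add(new MyIP { IP = line });
        }
    }
    return ips; // return the populated list instead of discarding it
}

private static void CheckProxy(object state)
{
    var u = user[0];
    var p = pass[0];
    List<MyIP> l = LoadIps(); // a populated list, not a new empty one
    Parallel.ForEach(l, ip_item =>
    {
        try
        {
            using (var client = new ProxyClient(ip_item.IP, u, p))
            {
                client.Connect();
                ip_item.AcceptsConnection = client.IsConnected;
            }
        }
        catch
        {
            // mark the item instead of mutating the shared list from multiple threads
            ip_item.AcceptsConnection = false;
        }
    });
    foreach (var item in l)
    {
        if (item.AcceptsConnection)
        {
            WriteToFile(user[0], pass[0]);
        }
        Console.WriteLine(item.IP + (item.AcceptsConnection ? " accepts connections" : " does not accept connections"));
    }
}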

Programmatically Import Block Into AutoCAD (C#)

I'm writing a plugin for AutoCAD and want to import all the blocks it will use at the beginning to make sure that they are available when needed. To do that, I use this method
public static void ImportBlocks(string[] filesToTryToImport, string filter = "")
{
    foreach (string blockToImport in filesToTryToImport)
    {
        if (blockToImport.Contains(filter))
        {
            Database sourceDb = new Database(false, true); // Temporary database to hold data for the block we want to import
            try
            {
                sourceDb.ReadDwgFile(blockToImport, System.IO.FileShare.Read, true, ""); // Read the DWG into a side database
                ObjectIdCollection blockIds = new ObjectIdCollection(); // Create a variable to store the list of block identifiers
                Autodesk.AutoCAD.DatabaseServices.TransactionManager tm = sourceDb.TransactionManager;
                using (Transaction myT = tm.StartTransaction())
                {
                    // Open the block table
                    BlockTable bt = (BlockTable)tm.GetObject(sourceDb.BlockTableId, OpenMode.ForRead, false);
                    // Check each block in the block table
                    foreach (ObjectId btrId in bt)
                    {
                        BlockTableRecord btr = (BlockTableRecord)tm.GetObject(btrId, OpenMode.ForRead, false);
                        // Only add named & non-layout blocks to the copy list
                        if (!btr.IsAnonymous && !btr.IsLayout)
                        {
                            blockIds.Add(btrId);
                        }
                        btr.Dispose();
                    }
                }
                // Copy blocks from source to destination database
                IdMapping mapping = new IdMapping();
                sourceDb.WblockCloneObjects(blockIds, _database.BlockTableId, mapping, DuplicateRecordCloning.Replace, false);
                _editor.WriteMessage("\nCopied " + blockIds.Count.ToString() + " block definitions from " + blockToImport + " to the current drawing.");
            }
            catch (Autodesk.AutoCAD.Runtime.Exception ex)
            {
                _editor.WriteMessage("\nError during copy: " + ex.Message);
            }
            finally
            {
                sourceDb.Dispose();
            }
        }
    }
}
That method appears to work because it successfully executes. However when I go to insert a block in the drawing via AutoCAD's interface it doesn't show up as an option and when I try to insert it programmatically it throws a FileNotFound exception meaning it didn't work. What's wrong with this method? Thanks in advance!
EDIT: Here is a less complicated method with a test method
public static void ImportSingleBlock(string fileToTryToImport)
{
    using (Transaction tr = _database.TransactionManager.StartTransaction())
    {
        Database sourceDb = new Database(false, true); // Temporary database to hold data for the block we want to import
        try
        {
            sourceDb.ReadDwgFile(fileToTryToImport, System.IO.FileShare.Read, true, ""); // Read the DWG into a side database
            _database.Insert(fileToTryToImport, sourceDb, false);
            _editor.WriteMessage("\nSUCCESS: " + fileToTryToImport);
        }
        catch (Autodesk.AutoCAD.Runtime.Exception ex)
        {
            _editor.WriteMessage("\nERROR: " + fileToTryToImport);
        }
        finally
        {
            sourceDb.Dispose();
        }
        tr.Commit();
    }
}

[CommandMethod("TESTSINGLEBLOCKIMPORTING")]
public void TestSingleBlockImporting()
{
    OpenFileDialog ofd = new OpenFileDialog();
    DialogResult result = ofd.ShowDialog();
    if (result == DialogResult.Cancel) // End the method on cancel
    {
        return;
    }
    string fileToTryToImport = ofd.FileName;
    using (Transaction tr = _database.TransactionManager.StartTransaction())
    {
        EntityMethods.ImportSingleBlock(fileToTryToImport);
        tr.Commit();
    }
}
This file is the block I'm trying to import. Hope this inspires someone, because I am desperately lost right now.
Your code is correct and should work. In fact, I tried it and it works fine. You're probably forgetting to Commit() an outer transaction (the one where you call this ImportBlocks() method). Check:
using (Transaction trans = _database.TransactionManager.StartTransaction())
{
    ImportBlocks(/* parameters here */);
    trans.Commit(); // remember to call this Commit; if omitted, Abort() is assumed
}
I had the same issue, with very similar code. The issue was that
_database.Insert(fileToTryToImport, sourceDb, false);
should be
_database.Insert(blockName, sourceDb, false);
You can see that the first parameter has to be the block name, not the file path.
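For illustration, a sketch of the corrected call (my choice of naming rule; it assumes the block should come in under the DWG's file name):

// Derive the block name from the file name, then insert the side database under it.
string blockName = System.IO.Path.GetFileNameWithoutExtension(fileToTryToImport);
_database.Insert(blockName, sourceDb, false);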

How to optimize this code to create document libraries

This is SharePoint code, but I know C# developers will understand it.
I can't think of a way to optimize it right now.
The idea is to create a document library when an event is created; the name of the document library is the start date in some format plus the event title.
The problem arises when the user creates many events on the same day with the same title.
I handled it with an if for only one occurrence of the duplication, but there should be a better way to do it.
The idea is to concatenate a number at the end of the doc library name: /1, /2, /3, etc.
using (SPSite oSPSite = new SPSite(SiteUrl))
{
    using (SPWeb oSPWeb = oSPSite.RootWeb)
    {
        if (oSPWeb.Lists[DocumentLibraryName] == null)
        {
            Guid ID = oSPWeb.Lists.Add(DocumentLibraryName, DocumentLibraryName + System.DateTime.Now.ToString(), SPListTemplateType.DocumentLibrary);
            SPList oSPList = oSPWeb.Lists[ID];
            DocumentLibraryLink = oSPList.DefaultViewUrl;
            oSPList.OnQuickLaunch = false;
            oSPList.Update();
        }
        else
        {
            if (oSPWeb.Lists[DocumentLibraryName + "/1"] == null)
            {
                Guid ID = oSPWeb.Lists.Add(DocumentLibraryName + "/1", DocumentLibraryName + System.DateTime.Now.ToString(), SPListTemplateType.DocumentLibrary);
                SPList oSPList = oSPWeb.Lists[ID];
                DocumentLibraryName = DocumentLibraryName + "/1";
                DocumentLibraryLink = oSPList.DefaultViewUrl;
                oSPList.OnQuickLaunch = false;
                oSPList.Update();
            }
        }
    }
}
In pseudo-code:
string docLibNameBase = "myLibname";
string docLibNameTemp = docLibNameBase; // we start with the calculated title
int iCounter = 1;
// we check if the currently calculated title is OK
while (ListExists(docLibNameTemp, yourWeb))
{
    docLibNameTemp = docLibNameBase + "/" + iCounter.ToString();
    iCounter++; // advance the counter, otherwise the loop never terminates
}
// this is where you create the new list using docLibNameTemp as a good title

bool ListExists(string docLibName, SPWeb web)
{
    try
    {
        // if there is no list with such a name, the indexer throws an exception
        return (web.Lists[docLibName] != null);
    }
    catch
    {
        return false;
    }
}
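A hypothetical creation call once the loop has settled on a free name, mirroring the question's own code:

// docLibNameTemp is the first free name found by the while loop above.
Guid id = oSPWeb.Lists.Add(docLibNameTemp, docLibNameTemp + System.DateTime.Now.ToString(), SPListTemplateType.DocumentLibrary);
SPList oSPList = oSPWeb.Lists[id];
DocumentLibraryLink = oSPList.DefaultViewUrl;
oSPList.OnQuickLaunch = false;
oSPList.Update();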
