MongoDB BulkWrite ExceededTimeLimit error in .NET - C#

I'm trying to push about 150k updates into a MongoDB database (v4.2.9 running on Windows, staging replica set with two nodes) using BulkWrite with the C# driver (v2.11.6), and it looks like it is impossible. The project targets .NET Framework 4.7.2.
The MongoDB C# driver documentation is terrible, but between forum posts and a lot of googling I was finally able to find a way to run about 150k updates as a single batch, something like this (slightly simplified for SO):
var client = new MongoClient(connString);
var database = client.GetDatabase(db);
// Build all the updates
List<UpdateOneModel<GroupEntry>> updates = new List<UpdateOneModel<GroupEntry>>();
foreach (GroupEntry groupEntry in stats)
{
    FilterDefinition<GroupEntry> filter = Builders<GroupEntry>.Filter.Eq(e => e.Key, groupEntry.Key);
    UpdateDefinitionBuilder<GroupEntry> update = Builders<GroupEntry>.Update;
    var groupEntrySubUpdates = new List<UpdateDefinition<GroupEntry>>();
    if (groupEntry.Value.Clicks != 0)
        groupEntrySubUpdates.Add(update.Inc(u => u.Value.Clicks, groupEntry.Value.Clicks));
    if (groupEntry.Value.Position != 0)
        groupEntrySubUpdates.Add(update.Set(u => u.Value.Position, groupEntry.Value.Position));
    // Combine the per-document sub-updates into a single upserting UpdateOneModel
    UpdateOneModel<GroupEntry> groupEntryUpdate = new UpdateOneModel<GroupEntry>(filter, update.Combine(groupEntrySubUpdates));
    groupEntryUpdate.IsUpsert = true;
    updates.Add(groupEntryUpdate);
}
// Now BulkWrite them in a transaction to make sure the data stay consistent
IClientSessionHandle session = client.StartSession();
session.StartTransaction();
IMongoCollection<GroupEntry> collection = database.GetCollection<GroupEntry>(collectionName);
// The following line FAILS after some time
BulkWriteResult<GroupEntry> bulkWriteResult = collection.BulkWrite(session, updates);
if (!bulkWriteResult.IsAcknowledged)
    throw new Exception("Mongo BulkWrite is not acknowledged!");
session.CommitTransaction();
The problem is that I keep getting the following exception:
{
    "operationTime": Timestamp(1612737199, 1),
    "ok": 0.0,
    "errmsg": "Exec error resulting in state FAILURE :: caused by :: operation was interrupted",
    "code": 262,
    "codeName": "ExceededTimeLimit",
    "$clusterTime": {
        "clusterTime": Timestamp(1612737199, 1),
        "signature": {
            "hash": new BinData(0, "ljcwS5Gf2JBpEu/OgPFbvRqclLw="),
            "keyId": NumberLong("6890288652832735234")
        }
    }
}
Does anyone have any clue? The MongoDB C# driver docs are completely useless. It looks like I should somehow set the maxTimeMS property, but that is not possible on BulkWrite. I have tried:
- Restarts and rebuilds
- Different versions of the Mongo driver
- Much bigger timeouts for all "timeout" properties on MongoClient and the session
- Smaller batches for BulkWrite (up to 1000 items per batch); it fails after 50-100 updates
- Hours and hours in the useless Mongo docs and Mongo JIRA
So far no luck. The funny thing is that the same approach works with C# driver 2.10.3 on .NET Core 3.1 (yes, I tried), even with bigger batches (about 300k updates).
What am I missing?
EDIT:
I tried setting maxCommitTime to 25 minutes based on dododo's comments, like this:
IClientSessionHandle session = client.StartSession(new ClientSessionOptions()
{
    DefaultTransactionOptions = new TransactionOptions(
        new Optional<ReadConcern>(ReadConcern.Default),
        new Optional<ReadPreference>(ReadPreference.Primary),
        new Optional<WriteConcern>(WriteConcern.Acknowledged),
        new Optional<TimeSpan?>(TimeSpan.FromMinutes(25))) // maxCommitTime
});
It now throws an exception while doing the commit: NoSuchTransaction - Transaction 1 has been aborted. We checked the MongoDB log file and found a new error in there:
Aborting transaction with txnNumber 1 on session
09ea7755-7148-43e8-83d8-8bf58c211bda because it has been running for
longer than 'transactionLifetimeLimitSeconds'
Based on the docs, this is 60 seconds by default, so we set it to 5 minutes and now it works.
Thank you, dododo, for pointing me in the right direction.
Anyway, it would be really great if the Mongo team described errors better and wrote documentation beyond basic CRUD operations.

As dododo suggested, this error was a manifestation of the server closing the transaction because it ran for longer than transactionLifetimeLimitSeconds, which is 60 seconds by default. So two things need to be done:
- Set the server parameter transactionLifetimeLimitSeconds to more than 60 seconds (a sketch of doing this from C# follows below)
- Set maxCommitTime to a higher value. I'm unable to find the default value, so I set it to 10 minutes (the same as transactionLifetimeLimitSeconds). Set it while starting a session (see the question).
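Here is a minimal sketch of raising the limit from C# by running setParameter against the admin database; the 600-second value is just an example, and the equivalent shell command is db.adminCommand({ setParameter: 1, transactionLifetimeLimitSeconds: 600 }):

using MongoDB.Bson;
using MongoDB.Driver;

// Minimal sketch: raise transactionLifetimeLimitSeconds at runtime.
// Assumes `client` is the MongoClient from the question; 600 seconds is just an example value.
var adminDb = client.GetDatabase("admin");
adminDb.RunCommand<BsonDocument>(new BsonDocument
{
    { "setParameter", 1 },
    { "transactionLifetimeLimitSeconds", 600 }
});

The parameter can also be set at startup via mongod --setParameter transactionLifetimeLimitSeconds=600.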
Anyway, documentation for this is missing and the error itself is misleading, so I hope this helps anyone who has to deal with it.


darksky api: TLS requirements changed, library no longer works

I've been using this C# library wrapper for the darksky API:
https://github.com/amweiss/dark-sky-core
In my implementation I poll once every 3 minutes to get the forecast, which I use in my home thermostat network:
async void GetForecast()
{
    // https://darksky.net/dev/docs#forecast-request
    float Temp, DewPoint, WindSpeed, WindChill, Humidity, HeatIndex;
    var client = new DarkSkyService("user-api-key");
    try
    {
        Forecast fc = await client.GetWeatherDataAsync(38.329444, -87.412778);
        Temp = (float)Math.Floor(fc.Currently.Temperature);
        PublishTemp(Temp);
        // For the database: get temp and dew point, calculate wind chill and heat index
        DewPoint = (float)fc.Currently.DewPoint;
        WindSpeed = (float)fc.Currently.WindSpeed;
        Humidity = (float)fc.Currently.Humidity; // range: 0-1
        WindChill = (float)CalculateWindChill(Temp, WindSpeed);
        HeatIndex = (float)CalculateHeatIndex(Temp, Humidity);
        SaveToDatabase(Temp, DewPoint, WindChill, HeatIndex);
        RxForecast = true;
        if (DateTime.Now.Hour != LastForecastHour)
        {
            LatestForecast = fc;
            LastForecastHour = DateTime.Now.Hour;
            PublishForecasts();
        }
    }
    catch (Exception)
    {
        // Swallow the error; flag the failure so we retry on a faster cycle
        RxForecast = false;
    }
    ForecastWaitTime = RxForecast ? FAST_FORECAST_CYCLE : SLOW_FORECAST_CYCLE;
}
This worked fine for about 4 months before it abruptly stopped working a week ago. Darksky support said that they recently implemented security updates and no longer support these common TLS versions and ciphers (quoting):
- TLS 1.0
- TLS 1.1
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_CBC_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
You can definitively determine whether your app works with the new SSL permissions by testing against
https://api.darksky.net:4433/. If you decide to update SSL on your end, you can test the API by sending a request here: https://api.darksky.net:4433/v1/status.txt.
Note that we will be making additional security-related updates in the coming weeks so there will be more changes in the near future. We don't have a notification system for alerting users to changes made on our backend but we do offer a feed for our status page, which often includes information about updates that have been or will be made (https://status.darksky.net/). We'll do our very best to make sure we communicate them as we're able to. Additionally, to avoid future disruptions we strongly recommend switching to one of the following, which should carry you through any of the additional security updates that will be applied in the near future:
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
I have no idea what changes I need to make to this code to 'update TLS', and I can't seem to get any more information from darksky. In the meantime, my alarm system is at a standstill.
One thing I don't understand is that, if I type this URL in a browser:
https://api.darksky.net/forecast/my-api-key/38.329444, -87.412778
It works fine and immediately returns a huge JSON forecast string. Trying the same thing in code with HttpWebRequest, HttpClient, or WebClient results in various "errors occurred" exceptions. Overall, I'd rather use the library for the returned Forecast object, which is easy to interpret.
Is this TLS update something I do at the system level, outside the development environment?
Or, are there any alternatives to darksky that I could switch to?
You have two options:
1: Update the library you are using and recompile. This issue was reported on its GitHub page:
https://github.com/jcheng31/DarkSkyApi/issues/28
2: It's a bit of work, but you could move the forecast module to Linux/Raspberry Pi, where TLS 1.2 is already configured. You would have to rewrite the routine in Python to do this. I verified this approach works on my own Pi network.
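A third possibility, if you want to stay on .NET Framework, is to force the runtime to negotiate TLS 1.2 before any requests are made. A minimal sketch, assuming .NET Framework 4.5 or later (where SecurityProtocolType.Tls12 is available); run it once at startup:

using System.Net;

// Minimal sketch: opt the process into TLS 1.2. ServicePointManager affects
// HttpWebRequest, WebClient, and HttpClient on .NET Framework.
// Call once at startup, e.g. in Main() or Application_Start().
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;

With that in place, the wrapper library's requests should be able to negotiate one of the ECDHE GCM suites darksky recommends, assuming the OS supports them.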

Fastest way to retrieve filtered remote event logs

I need to retrieve a few event logs (with specific IDs) from the Security event log on a handful of servers.
I've parallelized the server loop, which works fine and speeds things up, but a couple of the servers have huge retention strategies (exported EVTX files for the full logs are over 10 GB).
I get the logs using a loop like this:
var eventIds = new[] { 1, 2, 3, 4 };
var eventIdQueryStr = string.Join(" or ", eventIds.Select(x => $"EventID={x}"));
var queryXPath = $"*[System[TimeCreated[@SystemTime >= '{dateFrom:s}' and @SystemTime < '{dateTo:s}']]] and *[System[{eventIdQueryStr}]]";
using var session = new EventLogSession(
    serverName,
    domain,
    user,
    password, // a SecureString
    SessionAuthentication.Default);
var eventsQuery = new EventLogQuery("Security", PathType.LogName, queryXPath) { Session = session };
using (var logReader = new EventLogReader(eventsQuery))
{
    for (var eventDetail = logReader.ReadEvent();
         eventDetail != null;
         eventDetail = logReader.ReadEvent())
    {
        // event list is a `ConcurrentBag<EventRecord>`
        _eventList.Add(eventDetail);
        /* other stuff for showing progress, not relevant to the speed of the process */
    }
}
/* Parsing of the eventList here... irrelevant to the question */
This works, but it's extremely slow: I get around 1 million records (filtered by the XPath query) per hour for each server (not counting the parsing, which is why it's not relevant here) over a VPN.
I understand the event log on Windows is not an indexed database, so the query doesn't really speed things up (it just filters them), but this is the fastest I could get with several different techniques, including dumping the whole log on the remote server, copying the file, and/or parsing it directly over the network.
Is there anything I'm missing which could speed it up?
I've tested this on both .NET Core 3.0 and .NET Framework 4.8 with no difference in results whatsoever. Using the wevtutil command line with the same XPath query (via /q:"<query>") gives similar performance, but I can't shake the feeling that this can't really be the fastest it can go.
The servers with big retention in particular are dual-Xeon machines with LOTS of free CPU and RAM, so I'm quite positive this is not a hardware performance problem.
I'd be grateful for any tips. For reference, the parallel fan-out over servers looks roughly like the sketch below.
PS: when I wrote that the "progress-showing" parts are irrelevant, it's because I've tried removing them (just in case outputting progress was the culprit) with no relevant difference in the speed of the process.
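The per-server fan-out mentioned above; a minimal sketch where `servers` and `ReadServerEvents` are illustrative names (the helper wraps the EventLogSession/EventLogReader code shown earlier), not part of my real code:

using System.Collections.Concurrent;
using System.Diagnostics.Eventing.Reader;
using System.Threading.Tasks;

// Minimal sketch of the parallelized server loop described in the question.
var _eventList = new ConcurrentBag<EventRecord>();
Parallel.ForEach(servers, serverName =>
{
    // ReadServerEvents opens the session/reader shown above and adds
    // every matching EventRecord to the shared ConcurrentBag.
    ReadServerEvents(serverName, _eventList);
});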

Connecting to mongodb using C# quick tour not creating db or collection

I'm going through the mongoDB Driver Documentation Quick Tour for the first time. Specifically the 2.4 version.
I've created a fresh mongodb instance at the 192.168.1.50 address, and it appears to be running correctly.
The MongoDB documentation gives the following example:
var client = new MongoClient("mongodb://192.168.1.50:27017");
// It's ok if the database doesn't yet exist. It will be created upon first use
var database = client.GetDatabase("testDB");
// It's ok if the collection doesn't yet exist. It will be created upon first use.
var collection = database.GetCollection<BsonDocument>("testCollection");
However, when I go onto my server and enter the mongo console
mongo
And I list the databases using
show dbs
The output is only
admin 0.000GB
local 0.000GB
Is there anything else I should have done to make this work? I'm getting no errors on try/catch, and it appears to be running fine.
Troubleshooting
So far I've confirmed that mongodb is running by using the following:
netstat -plntu
Shows mongod running on 27017 in the LISTEN state.
I'd also be interested in knowing if there's a way on the mongodb server to view live connections, so I could see if it were actually successfully connecting.
Well, the problem is that you need to create at least one collection in order to persist the created database (weird, right?). I tested it with Robomongo and it works that way.
The issue is that the GetCollection method does not create the target collection; you can try this code:
static void Main(string[] args)
{
    var client = new MongoClient("mongodb://192.168.1.50:27017");
    // It's ok if the database doesn't yet exist. It will be created upon first use
    var database = client.GetDatabase("test");
    // GetCollection alone won't create the collection, so create it explicitly if missing
    string targetCollection = "testCollection";
    bool alreadyExists = database.ListCollections().ToList()
        .Any(x => x.GetElement("name").Value.ToString() == targetCollection);
    if (!alreadyExists)
    {
        database.CreateCollection(targetCollection);
    }
    var collection = database.GetCollection<BsonDocument>(targetCollection);
}
It turns out that a method I had found for setting multiple bindIps was incorrect. The problem wasn't with the C# at all. I found the solution here.
In case that ever goes away, here are the settings I had to use for multiple IPs:
Edit the file /etc/mongod.conf and wrap the comma-separated IPs in brackets:
bindIp = [127.0.0.1, 192.168.184.155, 96.88.169.145]
My original code worked fine; I just didn't have the brackets on the bindIp.
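As a side note on viewing live connections: on the server, db.serverStatus().connections in the mongo shell shows the current connection counts. From C#, a quick way to force a round trip and confirm the client really reaches the server is to list the databases; a minimal sketch:

using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Minimal sketch: constructing MongoClient alone doesn't open a connection;
// listing the databases forces an actual round trip to the server.
var client = new MongoClient("mongodb://192.168.1.50:27017");
foreach (BsonDocument db in client.ListDatabases().ToList()) // throws if the server is unreachable
{
    Console.WriteLine(db["name"]);
}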

Devart ChangeConflictException but values still written to database

I have an intermittent Devart.Data.Linq.ChangeConflictException: Row not found or changed rearing its ugly head. The funny thing is, the change is still written to the database!
The stack trace says:
Devart.Data.Linq.ChangeConflictException: Row not found or changed.
at Devart.Data.Linq.Engine.b4.a(IObjectEntry[] A_0, ConflictMode A_1, a A_2)
at Devart.Data.Linq.Engine.b4.a(ConflictMode A_0)
at Devart.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at Devart.Data.Linq.DataContext.SubmitChanges()
at Billing.Eway.EwayInternal.SuccessCustomerRenewal(String username, Bill bill, EwayTransaction transaction) in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\EwayInternal.cs:line 552
at Billing.Eway.Eway.BillAllUsers() in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\Eway.cs:line 138
And my code for Billing.Eway.EwayInternal.SuccessCustomerRenewal:
internal static void SuccessCustomerRenewal(string username, Bill bill, EwayTransaction transaction)
{
    // Give them their points!
    ApplyBillToCustomerAccount(username, bill, true);
    BillingEmail.SendRenewalSuccessEmail(username, bill, transaction);
    using (MsSqlDataClassesDataContext msSqlDb = new MsSqlDataClassesDataContext())
    {
        // TODO: Remove this logging
        msSqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MsSQL.txt", true) { AutoFlush = true };
        EwayCustomer ewayCustomer = msSqlDb.EwayCustomers.First(c => c.Username == username);
        ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
        using (MySqlDataContext mySqlDb = new MySqlDataContext())
        {
            // TODO: Remove this logging
            mySqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true };
            BillingMySqlContext.Customer grasCustomer = mySqlDb.Customers.First(c => c.Username == username);
            // Extend their membership date out so that the plan doesn't expire
            // because of a failed credit card charge.
            grasCustomer.MembershipDate = ewayCustomer.NextBillingDate.AddDays(1);
            mySqlDb.SubmitChanges(); // <-- This is line 552
        }
        msSqlDb.SubmitChanges();
    }
}
I know that the issue occurs on the mySqlDb.SubmitChanges() line, since that DB context is the one using Devart (a LINQ solution for MySQL databases); the other context uses pure MS LINQ.
Not only is the change written to the MySql DB (inner using block), but it is also written to the MsSql DB (outer using block). But that's where the magical success ends.
If I could I would write a Minimal, Complete and Verifiable example, but strangely I'm unable to generate a Devart ChangeConflictException.
So, why does the change get saved to the database after a Devart.Data.Linq.ChangeConflictException? When I previously encountered System.Data.Linq.ChangeConflictException changes weren't saved.
Edit 1:
I've also now included the .PDB file and gotten line number confirmation of the exact source of the exception.
Edit 2:
I now understand why I can't generate a ChangeConflictException, so how is it happening here?
These are the attributes for MembershipDate:
[Column(Name = @"Membership_Date", Storage = "_MembershipDate", CanBeNull = false, DbType = "DATETIME NOT NULL", UpdateCheck = UpdateCheck.Never)]
I know I can explicitly force my changes through to override any potential conflict, but that seems undesirable (I don't know what I would be overriding!). Similarly, I could wrap the submit in a try block and retry (re-reading each time) until success, but that seems clunky. How should I deal with this intermittent issue?
Edit 3:
It's not caused by multiple calls. This function is called in one place, by a single-instance app. It creates log entries every time it is run, and they are only getting created once. I have since moved the email call to the top of the method: the email only gets sent once, the exception occurs, and database changes are still made.
I believe it has something to do with the using blocks. Whilst stepping through the debugger on an unrelated issue, I entered the using block, but stopped execution before the SubmitChanges() call. And the changes were still written to the database. My understanding was that using blocks were to ensure resources were cleaned up (connections closed, etc), but it seems that the entire block is being executed. A new avenue to research...
But it still doesn't answer how a ChangeConflictException is even possible given Devart explicitly ignores them.
Edit 4:
So I wasn't going crazy, the database change did get submitted even after I ended execution in the middle of the using block, but it only works for websites.
Edit 5:
As per @Evk's suggestion, I've included some DB logging (and updated the stack trace and code snippet above). The incidence rate of this exception seems to have dropped, as it has only just happened again since I implemented the logging. Here are the additional details:
Outer (MS SQL) logfile:
SELECT TOP (1) [t0].[id], [t0].[Username], [t0].[TokenId], [t0].[PlanId], [t0].[SignupDate], [t0].[NextBillingDate], [t0].[PaymentType], [t0].[RetryCount], [t0].[AccountStatus], [t0].[CancelDate]
FROM [dbo].[EwayCustomer] AS [t0]
WHERE [t0].[Username] = @p0
-- @p0: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [dyonis]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.18408a
(It just shows the SELECT call from .First(); none of the updates appear.)
Inner (MySQL) logfile:
SELECT t1.Customer_ID, t1.Username, t1.Account_Group, t1.Account_Password, t1.First_Name, t1.Last_Name, t1.Account_Type, t1.Points, t1.PromoPoints, t1.Phone, t1.Cell, t1.Email, t1.Address1, t1.Address2, t1.City, t1.State, t1.Country, t1.Postcode, t1.Membership_Group, t1.Suspend_On_Zero_Points, t1.Yahoo_ID, t1.MSN_ID, t1.Skype_ID, t1.Repurchase_Thresh, t1.Active, t1.Delete_Account, t1.Last_Activity, t1.Membership_Expires_After_x_Days, t1.Membership_Date, t1.auth_name, t1.created_by, t1.created_on, t1.AccountGroup_Points_Used, t1.AccountGroup_Points_Threashold, t1.LegacyPoints, t1.Can_Make_Reservation, t1.Gallery_Access, t1.Blog_Access, t1.Private_FTP, t1.Photometrica, t1.Promo_Code, t1.Promo_Expire_DTime, t1.Gift_FirstName, t1.Gift_LastName, t1.Gift_Email, t1.Gift_Phone, t1.Gift_Active, t1.NoMarketingEmail, t1.Can_Schedule, t1.Refered_By, t1.Q1_Hear_About_Us, t1.Q2_Exp_Level, t1.Q3_Intrests, t1.GIS_DTime_UTC, t1.Membership_Expire_Notice_Sent, t1.Promo_Expire_Notice_Sent, t1.isEncrypted, t1.PlanId
FROM grasbill.customers t1
WHERE t1.Username = :p0 LIMIT 1
-- p0: Input VarChar (Size = 6; DbType = AnsiString) [dyonis]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
UPDATE grasbill.customers SET Membership_Date = :p1 WHERE Customer_ID = :key1
-- p1: Input DateTime (Size = 0; DbType = DateTime) [8/3/2016 4:42:53 AM]
-- key1: Input Int (Size = 0; DbType = Int32) [7731]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
(Shows the SELECT and UPDATE calls)
So the log files don't really give any clue as to what's happening, but again the MS SQL database has been updated! The NextBillingDate field has been set correctly, as per this line:
ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
If it hadn't been updated, the user would have been billed again on the next timer tick (5 minutes later), and I can see from the logging that this didn't happen.
One other interesting thing to note is the logfile timestamps. As you can see from the code above, I grab the current (UTC) time for the log filename. Windows File Explorer shows the following:
The MS SQL logfile was created at 04:42 (UTC) and last modified at 14:42 (UTC+10, Windows local time), but the MySQL logfile was last modified at 15:23 (UTC+10), 41 minutes after it was created. Now, I assume the logfile StreamWriter is closed as soon as it leaves scope. Is this delay an expected side effect of the exception? Did it take 41 minutes for the garbage collector to realise I no longer needed a reference to the StreamWriter? Or is something else going on?
Well, 6 months later I finally got to the bottom of this problem. Not sure if it will ever help anyone else, but I'll detail it anyway.
There were two problems in play here. One of them was idiocy (as they usually are), but the other was legitimately something I did not know or expect.
Problem 1
The reason the changes were magically made to the database even though there was an exception is that the very first line of code in that function, ApplyBillToCustomerAccount(username, bill, true);, updates the database! <facepalm>
Problem 2
The (Devart) ChangeConflictException isn't only thrown if the data has changed, but also if you're not making any changes at all. MS SQL stores DateTimes with high precision, but MySQL (or at least the one I'm running) only stores down to whole seconds. And here's where the intermittency came in: if my database calls were quick enough, or just near a second boundary, both values got rounded to the same time. Devart saw no changes to be written and threw a ChangeConflictException.
I recently made some optimisations to the database which resulted in far greater responsiveness, and massively increased incidence of this exception. That was one of the clues.
Also, I tried changing the Found Rows parameter to true as instructed in the linked Devart post, but it did not help in my case. Or perhaps I did it wrong. Either way, now that I've found the source of the issue I can eliminate the duplicate database updates, and guard against no-op submits as sketched below.
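For anyone hitting the same second-precision mismatch, here is a minimal defensive sketch (my own workaround idea, not something Devart prescribes): truncate the DateTime to whole seconds before assigning, and skip the submit when nothing actually changed. TruncateToSeconds is a hypothetical helper, not part of the original code:

using System;

// Hypothetical helper: drop sub-second ticks so the in-memory value matches
// what MySQL will actually store.
static DateTime TruncateToSeconds(DateTime value)
{
    return value.AddTicks(-(value.Ticks % TimeSpan.TicksPerSecond));
}

// Usage inside the inner using block from the question:
DateTime newMembershipDate = TruncateToSeconds(ewayCustomer.NextBillingDate.AddDays(1));
if (grasCustomer.MembershipDate != newMembershipDate)
{
    grasCustomer.MembershipDate = newMembershipDate;
    mySqlDb.SubmitChanges(); // only submit when there is a real change
}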

Rotativa ActionAsPdf() Very Slow

I'm using Rotativa 1.6.4 from NuGet and have noticed the following issue with the code below:
ActionAsPdf hangs randomly for an indeterminate amount of time.
Here is the code that hangs:
var pdfResult = new ActionAsPdf("Report", new { id = Request.Params["id"] })
{
    Cookies = cookieCollection,
    FormsAuthenticationCookieName = FormsAuthentication.FormsCookieName,
    CustomSwitches = "--load-error-handling ignore"
};
Background info that may help:
The CustomSwitches property is used to work around a documented issue when calling wkhtmltopdf.exe via ActionAsPdf, but it only suppresses errors in the wkhtmltopdf call, not in the code.
Observations, usage and testing:
It works, but when running the application (whether or not I'm stepping through code), it can take anywhere from 10 seconds up to about 4 minutes between hitting pdfResult = new ActionAsPdf and finally entering the "Report" action. I can't discern anything actually happening in the Visual Studio output window, and no errors are thrown that I have found. Just a random, slow transition into the Report() action.
I can run the Report() action directly via URL and it never slows like this; PDF generation is quite fast. I am using ActionAsPdf to obtain the binary so I can save it to the file system and send it via email, which is the prescribed method for this library.
The behavior exists on both a local Windows 10 dev box and a remote Server 2008 R2 test box, with .NET 4.5.1 and default IIS on each.
Questions I have:
Any idea what might cause this slowdown and how to remedy it?
I ended up using UrlAsPdf() instead of ActionAsPdf() and it works. It seems there may be some issues with ActionAsPdf(), and I have filed a bug with the Rotativa project on GitHub. ActionAsPdf() is still marked as beta, so hopefully it gets fixed in future versions or by the community.
In my case, I had to make a few more tweaks along with using UrlAsPdf(). I narrowed the issue down to the cookie collection I was adding, so I tried adding only the cookie I needed, and the issue was resolved. The following is the sample code I used.
var report = new UrlAsPdf(url);
Dictionary<string, string> cookieCollection = new Dictionary<string, string>();
foreach (var key in Request.Cookies.AllKeys)
{
    // Only forward the single authentication cookie the report needs
    if (Crypto.Hash("_user").Equals(key))
    {
        cookieCollection.Add(key, Request.Cookies.Get(key).Value);
        break;
    }
}
report.Cookies = cookieCollection;
report.FormsAuthenticationCookieName = FormsAuthentication.FormsCookieName;
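For completeness, since the question mentions grabbing the PDF binary to save and email: inside an MVC controller action you can render the result to bytes with Rotativa's BuildFile method. A minimal sketch; the output path is illustrative:

// Minimal sketch (inside an MVC controller action): render the PDF to a
// byte array and save it; hand the bytes to your mailer as an attachment.
var report = new UrlAsPdf(url)
{
    Cookies = cookieCollection,
    FormsAuthenticationCookieName = FormsAuthentication.FormsCookieName
};
byte[] pdfBytes = report.BuildFile(ControllerContext);
System.IO.File.WriteAllBytes(@"C:\reports\report.pdf", pdfBytes); // example path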
