I have a table structure which is nested to 5 levels, with a one-to-many relationship going downwards.
I want to know an efficient way to save this kind of data into SQL Server. I currently loop over each child object (C#) and run an insert, which becomes slow when the data is large.
Is there a way to pass the C# objects directly to SQL in traditional ADO.NET? I have a custom framework which fires a SQL script for each insert, picking up values from the object properties. I can't move to EF or NHibernate as it's an existing project.
I have seen ways where C# objects can be inserted into DataTables and then passed to SQL; is that an efficient way?
Please advise.
I'm assuming that you have something like this from a database perspective
CREATE TABLE Items (
    ID INT, -- primary key
    Name VARCHAR(MAX),
    ParentID INT -- foreign key that loops back to the same table
);
and an object like this in C#
public class Item
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int ParentID { get; set; }
    public Item Parent { get; set; }
    public List<Item> Children { get; set; }
}
and you have some code that looks like:
var root = MakeMeATree();
databaseSaver.SaveToDatabase(root);
that generates an insert per item for every child. If you have lots of children, this can really slow down the application.
What I would use (and have used) in this case is a custom SQL Server table type and a stored procedure to save the whole thing in a single call.
You will need to create a type that matches the table:
CREATE TYPE dbo.ItemType AS TABLE
(
ID INT,
Name VARCHAR(MAX),
ParentID INT
);
and a simple procedure that uses the type:
CREATE PROCEDURE dbo.InsertItems
(
    @Items AS dbo.ItemType READONLY
)
AS
BEGIN
    INSERT INTO Items (ID, Name, ParentID)
    SELECT ID, Name, ParentID FROM @Items
END
That does it from the SQL Server side. Now, on to the C# side. You need to do two things:
Flatten the hierarchy into a list
Send that list as a DataTable to the database
The first can be done using something like a Flatten extension method (I use one that is basically the same thing), with a simple
var items = root.Flatten(i => i.Children);
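If such a helper isn't at hand, a minimal `Flatten` extension can be sketched like this (the name and shape here are my assumption; any depth-first traversal works):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TreeExtensions
{
    // Depth-first flattening of a tree into a single flat sequence.
    // 'childSelector' returns the children of a node (may be null for leaves).
    public static IEnumerable<T> Flatten<T>(this T root, Func<T, IEnumerable<T>> childSelector)
    {
        var stack = new Stack<T>();
        stack.Push(root);
        while (stack.Count > 0)
        {
            var current = stack.Pop();
            yield return current;
            foreach (var child in childSelector(current) ?? Enumerable.Empty<T>())
                stack.Push(child);
        }
    }
}
```

The iterative stack avoids deep recursion for very tall hierarchies; order within the result does not matter here, since ParentID carries the structure.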
To do the second, you first need to declare a DataTable that matches the SQL Server type:
DataTable dt = new DataTable("Items");
dt.Columns.Add("ID", typeof(int));
dt.Columns.Add("Name", typeof(string));
dt.Columns.Add("ParentID", typeof(int));
next, just fill the values:
foreach(var item in items)
{
dt.Rows.Add(item.ID, item.Name, item.ParentID);
}
and attach it to a SqlParameter of type SqlDbType.Structured, like this:
using (var cmd = new SqlCommand("InsertItems", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var itemsParam = new SqlParameter("@Items", SqlDbType.Structured);
    itemsParam.Value = dt;
    cmd.Parameters.Add(itemsParam);
    cmd.ExecuteNonQuery();
}
And, that should be it.
Yes. If you want your dataset objects to be stored in the database, you can go with options like:
Create a SQL user-defined table type.
Fill the object with the required values, read the value in the stored procedure into a temp table, and then store it in the DB using your business logic.
I have a query I would like to run via C# application. There is no option to do this outside of the application. I have the following code:
var keyGroupsToCleanUp = new List<string>
{
"Address",
"Manufacturer",
"Product",
"Customer",
"Picture",
"Category",
"Vendor",
"SS_A_Attachment",
"SS_A_AttachmentDownload",
"SS_MAP_EntityMapping",
"SS_MAP_EntityWidgetMapping",
};
foreach (var keyGroup in keyGroupsToCleanUp)
{
_databaseFacade.ExecuteSqlCommand($@"
DELETE
FROM GenericAttribute
WHERE KeyGroup = {keyGroup} AND [Key] = 'CommonId'
AND EntityId NOT IN (SELECT Id FROM [{keyGroup}]);
");
}
I want to loop through each name in the List and run the below query for each of them. When I try to do this, I receive the following error:
System.Data.SqlClient.SqlException (0x80131904): Invalid object name '#p1'.
From what I have gathered after searching online, this is because a table name cannot be passed as a string parameter. You have to declare a variable and use this variable for the table name. I learned that a table variable has columns that need to be declared, and felt a wave of dread wash over me. None of these tables have the same column structure.
Is what I am trying to do possible? If so, how can I do it?
The GenericAttributes table is one large table that consists of six columns.
When I joined the project this is being used on, it had already been used to the point where it was irreplaceable. You can save additional data for a database table in here by specifying the KeyGroup as the database table. We have a table called "Address", and we save additional data in the GenericAttributes table for the Address (it does not make sense, I know). This causes a lot of issues because a relational database is not meant for this. The query I have written above looks for rows in the GenericAttributes table that are now detached. For example, the EntityId 0 does not exist as an Id in Address, so it would be returned here. That row must then be deleted, because it is linked to a non-existent EntityId.
This is an example of a query that would achieve that:
// Address
_databaseFacade.ExecuteSqlCommand(@"
DELETE
FROM GenericAttribute
WHERE KeyGroup = 'Address' AND [Key] = 'CommonId'
AND EntityId NOT IN (SELECT Id FROM [Address]);
");
I have to do this for 11 tables, so I wanted to make it a bit easier to do. Every query is written in the same way. The only thing that changes is the KeyGroup and the table that it looks for. These will both always have the same name.
Here is an example of another call for Products. They are the same, the only difference is the KeyGroup and the Table in the NOT IN statement.
// Product
_databaseFacade.ExecuteSqlCommand(@"
DELETE
FROM GenericAttribute
WHERE KeyGroup = 'Product' AND [Key] = 'CommonId'
AND EntityId NOT IN (SELECT Id FROM Product);
");
To ensure there is no injection vulnerability, you can use dynamic SQL with QUOTENAME
_databaseFacade.ExecuteSqlRaw(@"
    DECLARE @sql nvarchar(max) = N'
        DELETE
        FROM GenericAttribute
        WHERE KeyGroup = @keyGroup AND [Key] = ''CommonId''
        AND EntityId NOT IN (SELECT Id FROM ' + QUOTENAME({0}) + N');
    ';
    EXEC sp_executesql @sql,
        N'@keyGroup nvarchar(100)',
        @keyGroup = {0};
", keyGroup);
Note how ExecuteSqlRaw turns the {0} placeholder into a real command parameter. Do not interpolate the string yourself with $.
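If you prefer to sanitize on the C# side before interpolating a table name, the bracket-quoting that T-SQL's QUOTENAME performs can be replicated in a small helper (a sketch; the class and method names are mine, not from any library):

```csharp
using System;

public static class SqlIdentifier
{
    // Mirrors T-SQL QUOTENAME with brackets: wraps the name in [ ] and
    // doubles any ']' inside it, so the result is always treated as a
    // single identifier and cannot break out into raw SQL.
    public static string QuoteName(string identifier)
    {
        if (string.IsNullOrEmpty(identifier))
            throw new ArgumentException("Identifier must be non-empty.", nameof(identifier));
        return "[" + identifier.Replace("]", "]]") + "]";
    }
}
```

This only escapes the identifier; the safest course is still to restrict the value to a known allow-list, as the keyGroupsToCleanUp list above effectively does.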
At a guess, you're using Entity Framework Core. The ExecuteSqlCommand method accepts a FormattableString, and converts any placeholders into command parameters. But your placeholders appear to be column/table names, which cannot be passed as parameters.
Since there's also an overload which accepts a string, which has different behaviour, this method has been marked as obsolete, and replaced by ExecuteSqlInterpolated and ExecuteSqlRaw.
Assuming none of your values can be influenced by the user, and you're happy that you're not going to introduce a SQL Injection vulnerability, you can use ExecuteSqlRaw instead:
_databaseFacade.ExecuteSqlRaw($@"
    DELETE
    FROM GenericAttribute
    WHERE KeyGroup = '{keyGroup}' AND [Key] = 'CommonId'
    AND EntityId NOT IN (SELECT Id FROM [{keyGroup}]);
");
Try the following:
foreach (var keyGroup in keyGroupsToCleanUp)
{
    // Note: the KeyGroup comparison can be parameterized, but a table
    // name cannot be passed as a parameter, so it is concatenated here.
    var sql = @"DELETE FROM GenericAttribute
                WHERE KeyGroup = @Group
                AND [Key] = 'CommonId'
                AND EntityId NOT IN (SELECT Id FROM [" + keyGroup + @"])";
    _databaseFacade.ExecuteSqlCommand(
        sql,
        new SqlParameter("@Group", keyGroup));
}
This code assumes that ExecuteSqlCommand in your facade follows the standard Microsoft pattern (same overloads as Microsoft's).
I have a List containing ids that I want to insert into a temp table using Dapper in order to avoid the SQL limit on parameters in the 'IN' clause.
So currently my code looks like this:
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
return db.Query<int>(
@"SELECT a.animalID
FROM
dbo.animalTypes [at]
INNER JOIN animals [a] on a.animalTypeId = at.animalTypeId
INNER JOIN edibleAnimals e on e.animalID = a.animalID
WHERE
at.animalId in @animalIds", new { animalIds }).ToList();
}
}
The problem I need to solve is that when there are more than 2100 ids in the animalIds list then I get a SQL error "The incoming request has too many parameters. The server supports a maximum of 2100 parameters".
So now I would like to create a temp table populated with the animalIds passed into the method. Then I can join the animals table on the temp table and avoid having a huge "IN" clause.
I have tried various combinations of syntax but not got anywhere.
This is where I am now:
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
db.Execute(@"SELECT INTO #tempAnmialIds @animalIds");
return db.Query<int>(
@"SELECT a.animalID
FROM
dbo.animalTypes [at]
INNER JOIN animals [a] on a.animalTypeId = at.animalTypeId
INNER JOIN edibleAnimals e on e.animalID = a.animalID
INNER JOIN #tempAnmialIds tmp on tmp.animalID = a.animalID").ToList();
}
}
I can't get the SELECT INTO working with the list of IDs. Am I going about this the wrong way? Maybe there is a better way to avoid the IN clause limit.
I do have a backup solution in that I can split the incoming list of animalIDs into blocks of 1000, but I've read that a large IN clause suffers a performance hit, and joining a temp table should be more efficient; it also means I don't need extra 'splitting' code to batch up the ids into blocks of 1000.
Ok, here's the version you want. I'm adding this as a separate answer, as my first answer using SP/TVP utilizes a different concept.
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
// This Open() call is vital! If you don't open the connection, Dapper will
// open/close it automagically, which means that you'll lose the created
// temp table directly after the statement completes.
db.Open();
// This temp table is created having a primary key. So make sure you don't pass
// any duplicate IDs
db.Execute("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
while (animalIds.Any())
{
// Build the statements to insert the Ids. For this, we need to split animalIDs
// into chunks of 1000, as this flavour of INSERT INTO is limited to 1000 values
// at a time.
var ids2Insert = animalIds.Take(1000);
animalIds = animalIds.Skip(1000).ToList();
StringBuilder stmt = new StringBuilder("INSERT INTO #tempAnimalIds VALUES (");
stmt.Append(string.Join("),(", ids2Insert));
stmt.Append(");");
db.Execute(stmt.ToString());
}
return db.Query<int>(@"SELECT animalID FROM #tempAnimalIds").ToList();
}
}
To test:
var ids = LoadAnimalTypeIdsFromAnimalIds(Enumerable.Range(1, 2500).ToList());
You just need to amend your select statement to what it originally was. As I don't have all your tables in my environment, I just selected from the created temp table to prove it works the way it should.
Pitfalls, see comments:
Open the connection at the beginning, otherwise the temp table will
be gone after dapper automatically closes the connection right after
creating the table.
This particular flavour of INSERT INTO is limited
to 1000 values at a time, so the passed IDs need to be split into
chunks accordingly.
Don't pass duplicate keys, as the primary key on the temp table will not allow that.
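The chunking logic above can be pulled out into a pure helper, which makes the 1000-row limit of the VALUES form testable without a database (names here are illustrative, not from the answer's code):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class InsertBatcher
{
    // Splits 'ids' into chunks of at most 'chunkSize' (1000 for the
    // INSERT INTO ... VALUES (...),(...) form) and renders one INSERT
    // statement per chunk.
    public static IEnumerable<string> BuildInsertStatements(
        IEnumerable<int> ids, string tableName, int chunkSize = 1000)
    {
        var remaining = ids.ToList();
        while (remaining.Any())
        {
            var chunk = remaining.Take(chunkSize).ToList();
            remaining = remaining.Skip(chunkSize).ToList();
            yield return $"INSERT INTO {tableName} VALUES ({string.Join("),(", chunk)});";
        }
    }
}
```

Because the IDs are ints rather than strings, joining them straight into the statement carries no injection risk; the tableName argument, of course, must be trusted.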
Edit
It seems Dapper supports a set-based operation which will make this work too:
public IList<int> LoadAnimalTypeIdsFromAnimalIdsV2(IList<int> animalIds)
{
// This creates an IEnumerable of an anonymous type containing an Id property. This seems
// to be necessary to be able to grab the Id by its name via Dapper.
var namedIDs = animalIds.Select(i => new {Id = i});
using (var db = new SqlConnection(this.connectionString))
{
// This is vital! If you don't open the connection, Dapper will open/close it
// automagically, which means that you'll lose the created temp table directly
// after the statement completes.
db.Open();
// This temp table is created having a primary key. So make sure you don't pass
// any duplicate IDs
db.Execute("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
// Using one of Dapper's convenient features, the INSERT becomes:
db.Execute("INSERT INTO #tempAnimalIds VALUES(@Id);", namedIDs);
return db.Query<int>(@"SELECT animalID FROM #tempAnimalIds").ToList();
}
}
I don't know how well this will perform compared to the previous version (i.e. 2500 single inserts instead of three inserts with 1000, 1000 and 500 values each). But the docs suggest that it performs better if used together with async, MARS and pipelining.
In your example, what I can't see is how your list of animalIds is actually passed to the query to be inserted into the #tempAnimalIDs table.
There is a way to do it without using a temp table, utilizing a stored procedure with a table value parameter.
SQL:
CREATE TYPE [dbo].[udtKeys] AS TABLE([i] [int] NOT NULL)
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[myProc](@data AS dbo.udtKeys READONLY) AS
BEGIN
    SELECT i FROM @data;
END
GO
This will create a user-defined table type called udtKeys which contains just one int column named i, and a stored procedure that expects a parameter of that type. The proc does nothing else but select the IDs you passed, but you can of course join other tables to it. For a hint regarding the syntax, see here.
C#:
var dataTable = new DataTable();
dataTable.Columns.Add("i", typeof(int));
foreach (var animalId in animalIds)
dataTable.Rows.Add(animalId);
using(SqlConnection conn = new SqlConnection("connectionString goes here"))
{
var r=conn.Query("myProc", new {data=dataTable},commandType: CommandType.StoredProcedure);
// r contains your results
}
The parameter within the procedure gets populated by passing a DataTable, and that DataTable's structure must match the one of the table type you created.
If you really need to pass more than 2100 values, you may want to consider indexing your table type to increase performance. You can actually give it a primary key if you don't pass any duplicate keys, like this:
CREATE TYPE [dbo].[udtKeys] AS TABLE(
[i] [int] NOT NULL,
PRIMARY KEY CLUSTERED
(
[i] ASC
)WITH (IGNORE_DUP_KEY = OFF)
)
GO
You may also need to grant execute permission on the type to the database user you execute this as, like so:
GRANT EXEC ON TYPE::[dbo].[udtKeys] TO [User]
GO
See also here and here.
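As a variant of the above, the DataTable construction can be isolated into a small testable helper, and Dapper's AsTableValuedParameter extension can be used to name the table type explicitly rather than relying on the stored procedure to infer it; treat the sketch below as illustrative:

```csharp
using System.Collections.Generic;
using System.Data;

public static class KeysTable
{
    // Builds a DataTable whose shape matches dbo.udtKeys: a single
    // int column named "i", one row per id.
    public static DataTable Build(IEnumerable<int> ids)
    {
        var table = new DataTable();
        table.Columns.Add("i", typeof(int));
        foreach (var id in ids)
            table.Rows.Add(id);
        return table;
    }
}

// With Dapper, one would then pass it explicitly typed, e.g.:
// var r = conn.Query("myProc",
//     new { data = KeysTable.Build(animalIds).AsTableValuedParameter("dbo.udtKeys") },
//     commandType: CommandType.StoredProcedure);
```

Naming the type up front also lets the same DataTable be used with plain SqlCommand parameters (SqlDbType.Structured plus TypeName), outside of stored procedures.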
For me, the best way I was able to come up with was turning the list into a comma-separated string in C# and then using string_split in SQL to insert the data into a temp table. There are probably upper limits to this, but in my case I was only dealing with 6,000 records and it worked really fast.
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
return db.Query<int>(
@" --Created a temp table to join to later. An index on this would probably be good too.
CREATE TABLE #tempAnimals (Id INT)
INSERT INTO #tempAnimals (ID)
SELECT value FROM string_split(@animalIdStrings, ',')
SELECT at.animalTypeID
FROM dbo.animalTypes [at]
JOIN animals [a] ON a.animalTypeId = at.animalTypeId
JOIN #tempAnimals temp ON temp.ID = a.animalID -- <-- added this
JOIN edibleAnimals e ON e.animalID = a.animalID",
new { animalIdStrings = string.Join(",", animalIds) }).ToList();
}
}
It might be worth noting that string_split is only available in SQL Server 2016 or higher, or, if using Azure SQL, compatibility level 130 or higher. https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql?view=sql-server-ver15
I created a user-defined table type in SQL Server:
CREATE TYPE dbo.TestType AS TABLE
(
ColumnA int,
ColumnB nvarchar(500)
)
And I'm using a stored procedure to insert records into the database:
create procedure [dbo].[sp_Test_CustomType]
    @testing TestType readonly
as
    insert into [dbo].[myTable]
    select ColumnA, ColumnB
    from @testing
And I would like to use EF to execute this stored procedure, but here's the problem: how can I pass a user defined table to the stored procedure?
I tried adding the stored procedure to the model, but I'm unable to find the desired stored procedure in the updated context.
What I'm trying to do is to execute a bulk insert to a table, here's the method that I'm currently using:
List<items> itemToInsertToDB = //fetchItems;
foreach(items i in itemToInsertToDB)
{
context.sp_InsertToTable(i.ColumnA, i.ColumnB)
}
Currently, I use a foreach loop to loop through the list and insert each item into the DB, but if the list has a lot of items there will be a performance issue, so I'm thinking of passing a list to the stored procedure and doing the insert inside it.
So how can I solve this problem? Or are there any better ways to do this?
Let's say you want to send a table with a single column of GUIDs.
First we need to create a structure using SqlMetaData which represents the schema of the table (columns).
The code below declares one column named "Id" of type GUID, matching the stored procedure's table type parameter:
var tableSchema = new List<SqlMetaData>(1)
{
new SqlMetaData("Id", SqlDbType.UniqueIdentifier)
}.ToArray();
Next you create a list of records that match the schema using SqlDataRecord.
The code below demonstrates how to add the items to a list using the schema created above. Create a new SqlDataRecord for each item, replace SetGuid with the setter for the corresponding type, and replace Guid.NewGuid() with the corresponding value.
Repeat new SqlDataRecord for each item and add them to a List:
var tableRow = new SqlDataRecord(tableSchema);
tableRow.SetGuid(0, Guid.NewGuid());
var table = new List<SqlDataRecord>(1)
{
tableRow
};
Then create the SqlParameter:
var parameter = new SqlParameter();
parameter.SqlDbType = SqlDbType.Structured;
parameter.ParameterName = "@UserIds"; // @UserIds is the stored procedure parameter name
parameter.TypeName = "{Your stored procedure type name}";
parameter.Value = table;
var parameters = new SqlParameter[1]
{
parameter
};
Then simply call the stored procedure by using the Database.SqlQuery.
IEnumerable<ReturnType> result;
using (var myContext = new DbContext())
{
    result = myContext.Database.SqlQuery<ReturnType>("GetUsers @UserIds", parameters)
        .ToList(); // calls the stored procedure
     // .ToListAsync(); // async variant
}
In SQL Server, create your User-Defined Table Type (I suffix them with TTV, Table Typed Value):
CREATE TYPE [dbo].[UniqueidentifiersTTV] AS TABLE(
[Id] [uniqueidentifier] NOT NULL
)
GO
Then specify the type as a parameter (don't forget, Table Type Values have to be readonly!):
CREATE PROCEDURE [dbo].[GetUsers] (
@UserIds [UniqueidentifiersTTV] READONLY
) AS
BEGIN
SET NOCOUNT ON
SELECT u.* -- Just an example :P
FROM [dbo].[Users] u
INNER JOIN @UserIds ids ON u.Id = ids.Id
END
I suggest not using a stored procedure to insert bulk data, but simply relying on Entity Framework's insert mechanism:
List<items> itemToInsertToDB = //fetchItems;
foreach(items i in itemToInsertToDB)
{
TestType t = new TestType() { ColumnA = i.ColumnA, ColumnB = i.ColumnB };
context.TestTypes.Add(t);
}
context.SaveChanges();
Entity Framework will perform those insertions in a single transaction and, depending on the version, batch them into fewer round trips, which usually comes close to executing a stored procedure. This is better than relying on a stored procedure just to insert a bulk of data.
I made a stored procedure in SQL Server 2008 which gives me the changes made to a table. I am using LINQ to SQL to use this table in C#.
My stored procedure is:
CREATE PROCEDURE dbo.getlog
-- Add the parameters for the stored procedure here
@p1 int = 0,
@p2 int = 0
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
DECLARE @from_lsn binary(10), @to_lsn binary(10)
SET @from_lsn =
sys.fn_cdc_get_min_lsn('dbo_User_Info')
SET @to_lsn = sys.fn_cdc_get_max_lsn()
SELECT ID_number, Name, Age FROM cdc.fn_cdc_get_all_changes_dbo_User_Info
(@from_lsn, @to_lsn, N'all');
END
GO
The above procedure runs fine in SQL Server. However, when I run this statement using LINQ in C#:
mytestDataContext obj = new mytestDataContext();
var test=obj.ExecuteCommand("dbo.getlog");
foreach( var abc in test)
{}
I get this error
Error 1 foreach statement cannot operate on variables of type 'int'
because 'int' does not contain a public definition for
'GetEnumerator'
ExecuteCommand returns an int, not your results.
See MSDN here: http://msdn.microsoft.com/en-us/library/system.data.linq.datacontext.executecommand.aspx
public int ExecuteCommand(
string command,
params Object[] parameters
)
I think you're after ExecuteQuery.
The ExecuteCommand method returns an Int32, and you can't run a foreach loop over a plain integer.
Return Value
Type: System.Int32
The number of rows modified by the executed command.
I'm not too familiar with the DataContext class, but you can use DataContext.ExecuteQuery, which returns IEnumerable<TResult>, and you can use a foreach loop with that.
Return Value
Type: System.Collections.Generic.IEnumerable<TResult>
A collection of objects returned by the query.
I don't know why you use a foreach statement here, but the ExecuteCommand method returns an int value, and a foreach loop needs an object that implements IEnumerable.
I may be assuming too much, but if you are writing the C# with a recent version of Visual Studio, then rather than directly specifying the T-SQL call to run the stored procedure as a string of literal text, you can drag and drop the stored procedure onto the LINQ to SQL modelling window. This will add it to the LINQ to SQL data context.
int param1 = 1;
int param2 = 2;
mytestDataContext obj = new mytestDataContext();
var test=obj.getlog(param1, param2);
foreach( var abc in test)
{
/* code inside loop */
}
The same technique can be used for calling user-defined functions.
Doing this will reduce typing and provide intellisense to help with calling the stored procedures and SQL functions.
You can use this library:
https://github.com/mrmmins/C-StoreProcedureModelBinding
It returns the values as a List; you only need to create a simple class with the names and value types, like:
var productos = DataReaderT.ReadStoredProceadures<MyCustomModel>(myDbEntityInstance, "dbo.MySPName", _generic);
and the MyCustomModel class is something like:
public class MyCustomModel
{
    public int id { get; set; }
    public int salary { get; set; }
    public string name { get; set; }
    public string school { get; set; }
}
and generic like:
List<Generic> _generic = new List<Generic>
{
    new Generic
    {
        Key = "@phase", Type = SqlDbType.Int, Value = "207"
    }
};
And now your productos has options like productos.First(), productos.Count(), foreach, etc.
In a project I am currently working on, I need to access 2 databases in LINQ in the following manner:
I get a list of all trip numbers between a specified date range from DB1, and store this as a list of 'long' values
I perform an extensive query with a lot of joins on DB2, but only looking at trips that have their trip number included in the above list.
Problem is, the trip list from DB1 often returns over 2100 items, and I of course hit the 2100-parameter limit in SQL, which causes my second query to fail. I've been looking at ways around this, such as described here, but they have the effect of essentially changing my query to LINQ-to-Objects, which causes a lot of issues with my joins.
Are there any other workarounds I can do?
As LINQ-to-SQL can call stored procs, you could:
have a stored proc that takes an array as an input and then puts the values in a temp table to join on
likewise, take a string that the stored proc splits
Or upload all the values to a temp table yourself and join on that table.
However maybe you should rethink the problem:
Sql server can be configured to allow query against tables in other databases (including oracle), if you are allowed this may be an option for you.
Could you use some replication system to keep a table of trip numbers updated in DB2?
Not sure whether this will help, but I had a similar issue for a one-off query I was writing in LINQPad and ended up defining and using a temporary table like this:
[Table(Name="#TmpTable1")]
public class TmpRecord
{
[Column(DbType="Int", IsPrimaryKey=true, UpdateCheck=UpdateCheck.Never)]
public int? Value { get; set; }
}
public Table<TmpRecord> TmpRecords
{
get { return base.GetTable<TmpRecord>(); }
}
public void DropTable<T>()
{
ExecuteCommand( "DROP TABLE " + Mapping.GetTable(typeof(T)).TableName );
}
public void CreateTable<T>()
{
ExecuteCommand(
typeof(DataContext)
.Assembly
.GetType("System.Data.Linq.SqlClient.SqlBuilder")
.InvokeMember("GetCreateTableCommand",
BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.InvokeMethod
, null, null, new[] { Mapping.GetTable(typeof(T)) } ) as string
);
}
Usage is something like
void Main()
{
List<int> ids = ....
this.Connection.Open();
// Note, if the connection is not opened here, the temporary table
// will be created but then dropped immediately.
CreateTable<TmpRecord>();
foreach(var id in ids)
TmpRecords.InsertOnSubmit( new TmpRecord() { Value = id}) ;
SubmitChanges();
var list1 = (from r in CustomerTransaction
join tt in TmpRecords on r.CustomerID equals tt.Value
where ....
select r).ToList();
DropTable<TmpRecord>();
this.Connection.Close();
}
In my case the temporary table only had one int column, but you should be able to define whatever columns you want, as long as you have a primary key.
You may split your query or use a temporary table in database2 filled with results from database1.