I am entering the student id as a random number into the DB:
int num = r.Next(1000);
Session["number"] = "SN" + (" ") + num.ToString();
But is there any chance of getting a duplicate number? How can I avoid this?
EDIT: I have an identity column, and the student id is separate from that ID; I am going to enter a random student id into the DB from the UI.
It is a very common task to have a column in a DB that is merely an integer unique ID. So much so that every database I've ever worked with has a specific column type, function, etc. for dealing with it. It will vary based on whatever specific database you use, but you should figure out what that is and use it.
You need a value that is unique, not random. The two are different. Random numbers repeat; they aren't unique. Unique numbers aren't necessarily random, either. For example, if you just increment numbers up from 0, every number is unique, but that's not in any way random.
You could use a GUID, which would be unique, but it would be 128 bits. That's pretty big. Most databases will just have a counter that they increment every time you add an item, so 32 bits is usually enough. This will save you a lot of space. Incrementing a counter is also quicker than calculating a GUID's new value. For DB operations that tend to involve adding lots of items, that could matter.
As Jodrell mentions in the comments, you should also consider the size of the index if you use a GUID or other large field. Storing and maintaining that index will be much more expensive (in both time and space) with a column that needs that many more bits.
If you try to do something yourself there's a good chance you'll do it wrong. Either your algorithm won't be entirely unique, it will have race conditions due to improper synchronization, it will be less performant because of excessive synchronization, it will be significantly larger because that's what it took to reduce the risk of collisions, etc. At the end of the day the database will have access to tools that you don't; let it take care of it so you don't need to worry about what you could mess up.
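To make the counter point concrete, here is a minimal sketch (in Java, purely illustrative; the class and method names are made up) of what an identity column amounts to: an atomically incremented counter, which stays unique under concurrency and is much cheaper than generating a GUID:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of what an identity column does for you: a counter
// incremented atomically, so two concurrent inserts can never be
// handed the same value. The names here are illustrative.
public class IdGenerator {
    private final AtomicLong counter = new AtomicLong(0);

    // Returns a new unique id on every call, even under concurrency.
    public long nextId() {
        return counter.incrementAndGet();
    }
}
```

Of course an in-process counter is lost when the application restarts, which is exactly why the answer says to let the database own it.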
Sure, there is a very real chance that you will get a duplicate number. Next(1000) just gives you a number between 0 and 999, and there is no guarantee that it will not be some number Next has already returned in the past.
If you are trying to work with unique values, look into using Guids instead of integers, or use a constantly increasing integer value instead of a random number. Here is the reference page on Guid:
http://msdn.microsoft.com/en-us/library/system.guid.aspx
You can use Guids instead of random ints; they are always unique.
There is no way to guarantee an int is unique unless you check every one that already exists, and even then, as the comments say, you are guaranteed duplicates once you pass 1000 ids.
EDIT:
I think Guids are the best fit here because of the question. First, indexing the table is not going to take long at all: given the range of the random int, it is assumed there will be fewer than 1000 students, and a 128-bit column is fine in a table with fewer than 1000 rows.
Guids are a good thing to learn, even though they are not always the most efficient choice.
Creating a unique Guid in C# has the benefit that you can keep using and displaying that id, as in the question, without another trip to the DB to figure out which unique id was assigned to the student.
Yes, you will get duplicates. If you want a truly unique value, you will need to use a Guid. If you still want to use numbers, then you will need to keep track of the numbers you have already used, similar to an identity column in a database.
Yes, you will certainly get duplicates. You could use a GUID instead:
Guid g = Guid.NewGuid();
GUIDs are theoretically "Globally Unique".
You can try generating the id using a Guid:
Session["number"] = "SN" + (" ") + Guid.NewGuid().ToString();
This makes a duplicate id practically impossible.
If you are using random numbers then no there is no way of avoiding it. There will always be a chance of a collision.
I think what you are probably looking for is an Identity column, or whatever the equivalent is for your database server.
In LINQ to SQL you can mark the column like this:
[Column ( IsPrimaryKey = true, IsDbGenerated = true )]
public int ID { get; set; }
I don't know if it helps you in ASP.NET, but maybe it is a good hint...
Yes, there is a chance, of course.
Quick solution:
Check whether the number already exists first, and generate again until it is no longer a duplicate.
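As a sketch of that quick solution (in Java, with a set standing in for the rows already in the database; the names are illustrative), note that once every value in the range is taken, the loop never terminates, which is the scaling problem other answers point out:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Check-then-retry: keep drawing random numbers until one is unused.
// 'used' stands in for the ids already in the database; in a real app
// the membership check would be a query or a unique constraint.
public class RetryUntilUnique {
    public static int uniqueRandom(Set<Integer> used, Random r, int bound) {
        int candidate;
        do {
            candidate = r.nextInt(bound);
        } while (used.contains(candidate));
        used.add(candidate);
        return candidate;
    }
}
```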
Related
I am developing an application which receives packets from the network and stores them in a database. In one part, I save DNS records to the db in this format:
IP address (unsigned 32-bit integer)
DNS record (unlimited string)
The rate of DNS records is about 10-100 records per second. As it's real-time, I don't have enough time to check for duplicates with a string search in the database. I was thinking of a good method to get a unique short integer (say, 64-bit) for each unique string, so the search becomes a number search instead of a string search and lets me check for duplicates faster. Any ideas about implementations of this, or better approaches, are appreciated. Samples in C# are preferred, but any good idea is welcome.
I would read through this, on hashing strings into integers, and since the addresses are pretty long (in characters), I would use a modulo function to keep the result within integer limits.
The results would be checked against a hash table for duplicates.
This could be done for the first 20 characters, and then the next 20 in a nested hash table if required, and so on.
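A hedged sketch of that hash-then-confirm approach (in Java; the hash function and class names are my own, not from any library the question mentions):

```java
import java.util.HashMap;
import java.util.Map;

// Hash each record to a 64-bit number, look the number up in a hash
// table, and only fall back to a string comparison on a hash hit.
public class DnsDedup {
    private final Map<Long, String> seen = new HashMap<>();

    // Returns true if the record was new, false if it is a duplicate.
    // A real implementation would keep a list per hash to survive the
    // rare case of two different strings sharing a hash value.
    public boolean addIfNew(String record) {
        long h = hash64(record);
        String existing = seen.get(h);
        if (record.equals(existing)) {
            return false; // confirmed duplicate
        }
        seen.put(h, record);
        return true;
    }

    // Simple polynomial hash folded into 64 bits by natural overflow;
    // not cryptographic, so collisions are possible but rare.
    static long hash64(String s) {
        long h = 1125899906842597L; // arbitrary prime seed
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }
        return h;
    }
}
```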
Make sure you set up your table indexes and primary keys correctly.
Load the table contents asynchronously every couple of seconds and populate a generic Dictionary<long, string> with it.
Perform the search on the dictionary, as it is optimized for lookups. If you need it even faster, use a Hashtable.
Flush the newly added entries asynchronously into the DB in a transaction.
P.S. Your scenario is too vague to create a decent code example.
I know similar questions have been asked, but I have a rather different scenario here.
I have a SQL Server database which will store TicketNumber and other details. This TicketNumber is generated randomly by a C# program and is passed to the database and stored there. The TicketNumber must be unique and can be from 000000000 to 999999999.
Currently, what I do is: I do a select statement to query all the existing TicketNumbers from the database:
Select TicketNumber from SomeTable
After that, I load all the TicketNumbers into a List:
List<int> temp = new List<int>();
//foreach loop to add all numbers to the List
Random random = new Random();
int randomNumber = random.Next(0, 1000000000);
if (!temp.Contains(randomNumber))
//Add this new number to the database
There is no problem with the code above; however, as the dataset gets larger, performance deteriorates (I have close to a hundred thousand records now). I'm wondering if there is a more effective way of handling this?
I can do this from either the C# application or the SQL Server side.
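For what it's worth, the `temp.Contains` call is the main in-memory cost in the code above: a `List` scans every element on each lookup, while a hash set answers in O(1). A sketch of the in-memory side (in Java; the real fix is still a unique constraint in the database, as the answers below say):

```java
import java.util.Random;
import java.util.Set;

// Draw random ticket numbers, using a hash set instead of a list so
// the "already used?" check is O(1) rather than a linear scan.
public class TicketNumbers {
    public static int newTicket(Set<Integer> existing, Random r) {
        int candidate;
        do {
            candidate = r.nextInt(1_000_000_000);
        } while (!existing.add(candidate)); // add() returns false on duplicates
        return candidate;
    }
}
```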
This answer assumes you can't change the requirements. If you can use a hi/lo scheme to generate unique IDs which aren't random, that would be better.
I assume you've already set this as a primary key in the database. Given that you've already got the information in the database, there's little sense (IMO) in fetching it to the client as well. That goes double if you've got multiple clients (which seems likely - if not now then in the future).
Instead, just try to insert a record with a random ID. If it works, great! If not, generate a new random number and try again.
Assuming on the order of a thousand inserts a day, after 1000 days you'll have a million records, so roughly one insert in a thousand will fail. That's only one retry a day; unless you've got some hard limit on insertion time, that seems pretty reasonable to me.
EDIT: I've just thought of another solution, which would take a bunch of storage, but might be quite reasonable otherwise... create a table with two columns:
NaturalID ObfuscatedID
Prepopulate that with a billion rows, which you generate by basically shuffling all the possible ticket IDs. It may take quite a while, but it's a one-off cost.
Now, you can use an auto-incrementing ID for your ticket table, and then either copy the corresponding obfuscated ID into the table as you populate it, or join into it when you need the ticket ID.
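The one-off shuffle could be sketched like this (in Java, scaled down to a small range; the names are illustrative, and the billion-row version would be generated the same way once and bulk-loaded):

```java
import java.util.Random;

// Generate the NaturalID -> ObfuscatedID mapping with a Fisher-Yates
// shuffle over all possible ids, scaled down to n ids here.
public class ObfuscatedIds {
    public static int[] buildMapping(int n, Random r) {
        int[] ids = new int[n];
        for (int i = 0; i < n; i++) ids[i] = i;
        for (int i = n - 1; i > 0; i--) {
            int j = r.nextInt(i + 1); // uniform over 0..i
            int tmp = ids[i];
            ids[i] = ids[j];
            ids[j] = tmp;
        }
        return ids; // ids[naturalId] == obfuscatedId
    }
}
```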
You can create a separate table with only one column; let's just call it UniqueID for now. Populate that column with every value from 000000000 to 999999999. Every time you want to generate a new number, do something like:
SELECT TOP 1 UniqueID From (Table) WHERE UniqueID NOT IN (SELECT ID FROM (YOUR TABLE))
The code has not been tested, but it shows the idea.
I need to count the total number of rows in a table that come before a certain row ID.
I have this query
select count (ClientID)
FROM [Seek].[dbo].[seekClient]
where ClientID < '12'
which works fine when the primary key is an integer, but I am not sure how to do it when the primary key is a GUID.
Kindly help me in this case.
Thanks
Short answer, this isn't possible, see this link. Most specifically:
Globally unique identifiers are typically not human readable, and they
are not intended to be read or interpreted by humans
Long answer: what's the rest of your table structure? There may be a different way to do what you're trying to do (I imagine it's possible using a date-created field, if you have one).
You should use a different column (not id) that defines what you mean by 'before'. It might be, for example, a 'DateOfCreation', 'creation_date' etc. column.
I was thinking of formatting it like this
TYYYYMMDDNNNNNNNNNNX
(1 character + 19 digits)
Where
T is type
YYYY is year
MM is month
DD is day
N is the sequential number
X is the check digit
The problem is, how do I generate the sequential number? My primary key is not an auto-increment integer value; if it were, I would use that.
EDIT: Can I have the sequential number reset itself after 1 day (24 hours)?
P2010120800000000001X <-- first transaction of 2010/12/08
P2010120800000000002X <-- second transaction of 2010/12/08
P2010120900000000001X <-- first transaction of 2010/12/09
(X is the check digit)
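Once a daily sequence number exists, producing the formatted string is mechanical. A sketch in Java; the check digit below is a plain digit-sum mod 10, chosen only as a placeholder since the question doesn't say which check-digit scheme is intended:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Builds T + YYYYMMDD + 10-digit sequence + check digit (20 chars).
// The check digit is a simple sum-of-digits mod 10, purely a
// placeholder: substitute whatever scheme (e.g. Luhn) you actually use.
public class TransactionNumber {
    public static String format(char type, LocalDate date, long sequence) {
        String body = type
                + date.format(DateTimeFormatter.BASIC_ISO_DATE)
                + String.format("%010d", sequence);
        int sum = 0;
        for (char c : body.toCharArray()) {
            if (Character.isDigit(c)) sum += c - '0';
        }
        return body + (char) ('0' + sum % 10);
    }
}
```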
The question is meaningless without a context. Others have commented on your question. Please answer the comments. What is the "transaction number" for; where is it used; what is the "transaction" that you need an external identifier for.
Identity or auto-increment columns may have some use internally, but they are quite useless outside the database.
If we had the full schema, knowing which components are PKs that will not change, etc, we could provide a more meaningful answer.
At first glance, without the info requested, I see no point in encoding the date in the transaction number (the date is already stored in the transaction row).
You seem to have the formula for your transaction number, the only question you really have is how to generate a sequence number that resets each day.
You can consider the following options:
Use a database sequence and a scheduled job that resets it.
Use a sequence from outside the database (for instance, a file or memory structure).
With the proper isolation level, you should be able to include the (SELECT (MAX(Seq) + 1) FROM Table WHERE DateCol = CURRENT_DATE) as a value expression in your INSERT statement.
Also note that there's probably no real reason to actually store the transaction number in the database as it's easy to derive it from the information it encodes. All you need to store is the sequential number.
You can track the auto-incs separately.
Or, as you get ready to add a new transaction, first poll the DB for the newest transaction, break its number apart to find the sequence, and increase that.
Or add an auto-inc field, but don't use it as a key.
You can use a UUID generator so that you don't have to manage a sequence, and you can be sure there will be no collisions between transactions.
eg :
in java :
java.util.UUID.randomUUID()
05f4c168-083a-4107-84ef-10346fad6f58
5fb202f1-5d2a-4d59-bbeb-5bcabd513520
31836df6-d4ee-457b-a47a-d491d5960530
3aaaa3c2-c1a0-4978-9ca8-be1c7a0798cf
in php :
echo uniqid()
4d00fe31232b6
4d00fe4eeefc2
4d00fe575c262
There is a UUID generator in nearly all languages.
A primary key that big is a very, very bad idea. You will waste huge amounts of table space unnecessarily and make your table very slow to query and manage. Make your primary key a small, simple incrementing int and store the transaction date in a separate field. When necessary, you can select a per-day transaction number in a query with:
SELECT ROW_NUMBER() OVER (PARTITION BY TxnDate ORDER BY TxnID), TxnDate, ...
Please read this regarding good primary key selection criteria. http://www.sqlskills.com/BLOGS/KIMBERLY/category/Indexes.aspx
I was wondering if anyone has a good solution to a problem I've encountered numerous times during the last years.
I have a shopping cart, and my customer explicitly requests that its order is significant. So I need to persist the order to the DB.
The obvious way would be to simply add an OrderField where I assign the numbers 0 to N and sort by it.
But doing so would make reordering harder, and I somehow feel this solution is fragile and will come back to bite me some day.
(I use C# 3,5 with NHibernate and SQL Server 2005)
Thank you
OK, here is my solution, to make programming this easier for anyone who happens along to this thread. The trick is being able to update all the order indexes above or below an insertion/deletion in one UPDATE.
Use a numeric (integer) column in your table, supported by the following SQL queries:
CREATE TABLE myitems (Myitem TEXT, id INTEGER PRIMARY KEY, orderindex NUMERIC);
To delete the item at orderindex 6:
DELETE FROM myitems WHERE orderindex=6;
UPDATE myitems SET orderindex = (orderindex - 1) WHERE orderindex > 6;
To swap two items (4 and 7):
UPDATE myitems SET orderindex = 0 WHERE orderindex = 4;
UPDATE myitems SET orderindex = 4 WHERE orderindex = 7;
UPDATE myitems SET orderindex = 7 WHERE orderindex = 0;
i.e. 0 is otherwise unused, so use it as a dummy value to avoid ever having two items with the same index during the swap.
To insert at 3:
UPDATE myitems SET orderindex = (orderindex + 1) WHERE orderindex > 2;
INSERT INTO myitems (Myitem,orderindex) values ("MytxtitemHere",3)
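The same index arithmetic can be sanity-checked in memory, since java.util.List applies exactly this shifting on insert and remove (a toy sketch, not part of the answer's SQL):

```java
import java.util.ArrayList;
import java.util.List;

// In-memory analogue of the SQL above: inserting at an index shifts
// later indexes up by one, removing shifts them down.
public class OrderIndexDemo {
    public static List<String> demo() {
        List<String> items = new ArrayList<>(List.of("a", "b", "d"));
        items.add(2, "c");  // like UPDATE ... +1 WHERE orderindex > 2, then INSERT
        items.remove("b");  // like DELETE, then UPDATE ... -1 above the hole
        return items;
    }
}
```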
The best solution is a doubly linked list: O(1) for all operations except indexing. Nothing can index into it quickly in SQL, though, except a WHERE clause on the item you want.
Schemes like 0, 10, 20 fail. Sequence-column schemes fail. A float sequence column fails at group moves.
A doubly linked list handles addition, removal, group deletion, group addition, and group move with the same operations. A singly linked list works OK too, but in my opinion a doubly linked one is better with SQL; a singly linked list requires you to have the entire list.
FWIW, I think the way you suggest (i.e. committing the order to the database) is not a bad solution to your problem. I also think it's probably the safest/most reliable way.
How about using a linked-list implementation? Have one column hold the value (order number) of the next item. I think it's by far the easiest to use when inserting in between; there is no need to renumber.
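A sketch of that linked-list idea (in Java, with a map standing in for the next-item column; the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Each row stores the id of the item that follows it (absent for the
// tail). Reordering touches at most a few rows, but reading the list
// back means walking the chain from the head.
public class LinkedOrder {
    public static List<Integer> readOrder(Map<Integer, Integer> nextOf, Integer head) {
        List<Integer> order = new ArrayList<>();
        Integer cur = head;
        while (cur != null) {
            order.add(cur);
            cur = nextOf.get(cur); // null when cur is the tail
        }
        return order;
    }
}
```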
Unfortunately there is no magic bullet for this. You cannot guarantee the order of any SELECT statement without an ORDER BY clause. You need to add the column and program around it.
I don't know that I'd recommend adding gaps in the order sequence. Depending on the size of your lists and the traffic on the site, you might gain very little for the overhead of handling the logic (you'd still need to cater for the occasion when all the gaps have been used up). I'd take a close look at what benefits this would give you in your situation.
Sorry I can't offer anything better; hope this helped.
I wouldn't recommend the A, AA, B, BA, BB approach at all. There's a lot of extra processing involved to determine hierarchy and inserting entries in between is not fun at all.
Just add an OrderField, an integer. Don't use gaps, because then you either have to work with a non-standard 'step' on your next middle insert, or you have to resynchronize the list first and then add the new entry.
Having 0...N is easy to reorder: either use Array or List methods outside of SQL to reorder the collection as a whole and then update each entry, or figure out where you are inserting and +1 or -1 each entry after or before it accordingly.
Once you have a little library written for it, it'll be a piece of cake.
I would just add an order field; it's the simplest way. If the customer can reorder the items, or you need to insert in the middle, just rewrite the order fields for all items in that batch.
If down the line you find this limiting due to poor performance on inserts and updates, you can use a varchar field rather than an integer. That allows quite a high level of precision when inserting: e.g., to insert between items 'A' and 'B' you can insert an item ordered as 'AA'. This is almost certainly overkill for a shopping cart, though.
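The 'AA' trick can be sketched as follows (in Java; heavily simplified, and it assumes the upper key does not already extend the lower one, which a real implementation would have to handle):

```java
// A key sorting strictly between two existing keys can often be made
// by extending the lower key. Real implementations balance key growth
// and handle the case where no such extension fits.
public class LexOrder {
    public static String between(String lo, String hi) {
        String candidate = lo + "A"; // "A" -> "AA", "AA" -> "AAA", ...
        if (lo.compareTo(candidate) < 0 && candidate.compareTo(hi) < 0) {
            return candidate;
        }
        throw new IllegalArgumentException("no room between " + lo + " and " + hi);
    }
}
```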
On a level of abstraction above the cart items, say a CartOrder (which has a 1-n relationship with CartItem), you can maintain a field called itemOrder, which is just a comma-separated list of the ids (PKs) of the relevant CartItem records. The application layer then parses it and arranges the item models accordingly. The big plus of this approach shows up when the order is reshuffled: nothing changes on the individual items, whereas if the order were persisted as an index field on each order-item row, you would have to issue an update command for every row to change its index field.
Please let me know your criticisms of this approach; I am curious to know in which ways it might fail.
I solved it pragmatically like this:
The order is defined in the UI.
The backend gets a POST request that contains the IDs and the corresponding Position of every item in the list.
I start a transaction and update the position for every ID.
Done.
So ordering is expensive but reading the ordered list is super cheap.
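The cheap read side of that approach can be sketched like this (in Java; the map stands in for the id/position rows the POST request updated inside one transaction):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// With a position stored per item id, reading the ordered list is just
// a sort by position.
public class CartOrderRead {
    public static List<Integer> ordered(Map<Integer, Integer> positionOf) {
        List<Integer> ids = new ArrayList<>(positionOf.keySet());
        ids.sort((a, b) -> positionOf.get(a) - positionOf.get(b));
        return ids;
    }
}
```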
I would recommend keeping gaps in the order number, so instead of 1,2,3 etc, use 10,20,30... If you need to just insert one more item, you could put it at 15, rather than reordering everything at that point.
Well, I would say the short answer is:
Create a primary key with autoidentity in the cart-contents table, then insert rows in the correct top-down order. Selecting from the table ordered by the autoidentity primary key will then give you the same list. Doing it this way means you have to delete all items and reinsert them whenever the cart contents change (but that is still quite a clean way of doing it). If that's not feasible, go with the order column suggested by others.
When I use Hibernate and need to save the order of a @OneToMany, I use a Map and not a List.
@OneToMany(fetch = FetchType.EAGER, mappedBy = "rule", cascade = CascadeType.ALL)
@MapKey(name = "position")
@OrderBy("position")
private Map<Integer, RuleAction> actions = LazyMap.decorate(new LinkedHashMap<>(), FactoryUtils.instantiateFactory(RuleAction.class, new Class[] { Rule.class }, new Object[] { this }));
In this Java example, position is an Integer property of RuleAction, so the order is persisted that way. I guess in C# this would look rather similar.