The IDs in my table seem to make fairly large skips, in the thousands even. Here's an example:
The last IDs of this table go above the 30,000 mark even though there are fewer than a thousand rows in the table.
- What is causing this?
- How can I prevent this in the future?
- How can I stop my current table from continuing on this dark road?
EDIT: This is happening in an old, small desktop application of mine, and I have never deleted rows from this table, nor have I used any transactions or rollbacks.
Answer:
What is causing this?
A couple of potential causes come to mind:
- Rows were deleted?
- The results you're looking at aren't sorted by the identifier column?
- Identifiers were allocated in a transaction which wasn't committed?
- The database engine pre-allocated a block of identifiers as an internal performance optimization, and that allocation was lost (after an unexpected server restart, for example)?
There could be more potential causes I'm not thinking of.
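The deletion cause is the easiest to see for yourself. Here's a minimal sketch using SQLite from Python's standard library (the question doesn't say which engine is in use, so this is just illustrative; the table and column names are made up). With `AUTOINCREMENT`, SQLite never reuses the ids of deleted rows, which is the same gap-leaving behavior most engines exhibit:

```python
import sqlite3

# In-memory database; AUTOINCREMENT tells SQLite never to reuse
# the ids of deleted rows, so deletes leave permanent gaps.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)", [("a",), ("b",), ("c",)])

conn.execute("DELETE FROM items WHERE id IN (2, 3)")   # delete the newest rows
conn.execute("INSERT INTO items (name) VALUES ('d')")  # new row gets id 4, not 2

ids = [row[0] for row in conn.execute("SELECT id FROM items ORDER BY id")]
print(ids)  # [1, 4] -- a gap where ids 2 and 3 used to be
```

The same pattern at a larger scale (bulk deletes, failed batch inserts) is how a table with under a thousand rows can end up with ids past 30,000.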
How can I prevent this in the future?
Depends on the cause. But it's kind of a moot point, really. Why would you need to "prevent this"? What exactly is wrong with this situation? Identifiers don't need to be consecutive, they just need to be unique. (Preferably increasing as well if the column is indexed, since out-of-order inserts fragment the index, but gaps don't hurt that at all.)
How can I stop my current table from continuing on this dark road?
The dark road of... generating unique identifiers? Well, I guess you could manually supply unique identifiers instead of relying on the auto-generated ones. GUIDs are good for that sort of thing. There are pros and cons, though. (I don't know if recent implementations have improved this, but historically random GUIDs don't make for a good clustered index.)
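To make the "manually supply unique identifiers" option concrete, here's a small sketch using Python's `uuid` module with SQLite (again, the engine, table, and column names are assumptions for illustration, not the asker's actual schema):

```python
import sqlite3
import uuid

# The application generates the key itself instead of relying on
# the database's auto-increment counter; uuid4() is a random GUID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id TEXT PRIMARY KEY, name TEXT)")

for name in ("a", "b", "c"):
    conn.execute("INSERT INTO items (id, name) VALUES (?, ?)",
                 (str(uuid.uuid4()), name))

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 3 rows, each with its own globally unique id
```

Note that this trades the gap "problem" for a different one: the ids are no longer ordered at all, which is exactly the clustered-index downside mentioned above.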