MySQL InnoDB: running out of AUTO_INCREMENT values

By Federico Razzoli

Most InnoDB primary keys are built on integer columns with the AUTO_INCREMENT option (which is a very good practice, for reasons that are outside the scope of this article). But we have to make sure that we are not going to run out of AUTO_INCREMENT values. If that happens, we will get errors like this:

ERROR 167 (22003): Out of range value for column 'a' at row 1

Obviously, when creating tables, we should use a type that is sufficiently big, and make it UNSIGNED to avoid wasting half of its range. But there are also some details about AUTO_INCREMENT that we should keep in mind.
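
When in doubt, BIGINT UNSIGNED is a safe default: it allows up to 18,446,744,073,709,551,615 values, against 4,294,967,295 for INT UNSIGNED. A made-up illustration (the events table and its columns are not from the original example):

CREATE TABLE events
(
id BIGINT UNSIGNED AUTO_INCREMENT,
payload VARCHAR(255),
PRIMARY KEY (id)
);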

First, the value to monitor is not MAX(id), but the AUTO_INCREMENT column in information_schema.TABLES: it shows the next AUTO_INCREMENT value that will be used.
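
For example, assuming a table called employee in a database called test (adjust the names to your case), the next value can be read with:

SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'employee';

Note that in MySQL 8.0 the information_schema statistics are cached, so the value may be stale unless information_schema_stats_expiry is lowered or ANALYZE TABLE is run first.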

A used value is lost. It doesn’t matter if you delete the row: its AUTO_INCREMENT value will not be reused. This is why we should monitor the next value from information_schema: it is possible that we have used up all the AUTO_INCREMENT values even though MAX(id) is much lower.

Values are lost even for transactions that fail or are rolled back. This is the reason why the next AUTO_INCREMENT value is generally higher, sometimes much higher, than MAX(id). This happens especially if we have periodic bulk inserts that add many rows in one transaction: if such an operation fails, it wastes AUTO_INCREMENT values, and if it fails often, we could run out of them.
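
A quick way to see the effect, sketched here on the employee table that is defined later in this article:

START TRANSACTION;
INSERT INTO employee (name) VALUES ('wasted');
ROLLBACK;
-- the row is not there, but the AUTO_INCREMENT counter
-- reported by information_schema.TABLES has still advanced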

Easy trick


An easy trick is to use a command like this:

ALTER TABLE table_name AUTO_INCREMENT = 1000;

But this only works if all values in the table are lower than the specified AUTO_INCREMENT. It can be useful if a bulk insert failed and no new rows have been added since, but typically it only postpones the problem a bit.
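
A prudent way to apply it is to check the current maximum first (InnoDB refuses to lower the counter below the current maximum id anyway):

SELECT MAX(id) FROM table_name;
-- only if the result is lower than the value we want to set:
ALTER TABLE table_name AUTO_INCREMENT = 1000;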

Or we could change all the ids to fill the holes, and then reset AUTO_INCREMENT. This is possible if, for example, the server is not used at night or during the weekend. But in other cases it is not a viable solution: modifying a primary key is not a cheap operation in InnoDB, and most likely we would have to do it for many rows. Also, we would need to lock the table to prevent new rows (with high values) from being inserted in the process.
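
When such a maintenance window does exist, a rough sketch of the renumbering (my own, assuming the key column is called id, that no foreign keys reference it, and that nothing else is writing to the table) could be:

SET @n := 0;
UPDATE table_name SET id = (@n := @n + 1) ORDER BY id;  -- compact the ids, reusing the holes
ALTER TABLE table_name AUTO_INCREMENT = 1;               -- a value below MAX(id) is raised to MAX(id) + 1

The @variable-in-expression trick is deprecated in MySQL 8.0 but still works; a stored procedure or an application-side loop is an alternative.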

So, what can we do?


If we have a write-heavy workload, lowering the AUTO_INCREMENT values is not trivial, because new rows are inserted all the time, and changing the id of many rows can be a long operation. If we do it in a single transaction to make things faster, it locks the table. But we can do something similar to what pt-online-schema-change does to alter a table without locks. Let’s see the procedure step by step.

For our example, we will assume the following trivial table:

CREATE TABLE employee
(
id INT UNSIGNED AUTO_INCREMENT,
name VARCHAR(100),
PRIMARY KEY (id)
);
1)


First, let’s create a table with the same structure. The ids in the new table will start from 1. Let’s also add a column that references the original table’s id.

CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD COLUMN old_id INT UNSIGNED NULL;
ALTER TABLE tmp_employee ADD UNIQUE INDEX idx_old_id (old_id);
2)


Now we need to add a trigger to the original table that copies new rows into the new table:

DELIMITER ||

CREATE TRIGGER employee_ai
AFTER INSERT
ON employee
FOR EACH ROW
BEGIN
INSERT INTO tmp_employee (old_id, name) VALUES (NEW.id, NEW.name);
END ||

DELIMITER ;

Note that the original table’s id and the new table’s old_id columns are kept in sync.
This is just an example. In the real world, I would also create triggers for DELETE and UPDATE, as sketched below. These triggers need a reference to the corresponding row in the new table, which is why old_id exists and is indexed.
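
A rough sketch of what those two extra triggers could look like for this example (the names employee_ad and employee_au are mine):

DELIMITER ||

CREATE TRIGGER employee_ad
AFTER DELETE
ON employee
FOR EACH ROW
BEGIN
DELETE FROM tmp_employee WHERE old_id = OLD.id;
END ||

CREATE TRIGGER employee_au
AFTER UPDATE
ON employee
FOR EACH ROW
BEGIN
UPDATE tmp_employee SET old_id = NEW.id, name = NEW.name WHERE old_id = OLD.id;
END ||

DELIMITER ;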

What if the table already has triggers? Fortunately, MySQL 5.7 and MariaDB 10.2 support multiple triggers per timing/event. With older versions, just modify the existing triggers and add the queries you need.
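
In those versions the extra trigger can be ordered relative to an existing one with FOLLOWS or PRECEDES, for example (existing_trigger is a placeholder for whatever trigger is already there):

CREATE TRIGGER employee_ai
AFTER INSERT ON employee
FOR EACH ROW
FOLLOWS existing_trigger
INSERT INTO tmp_employee (old_id, name) VALUES (NEW.id, NEW.name);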

3)


Copy the rows from the original table to the new table. If the new table already contains rows (added by our INSERT trigger), we won’t copy them again.

INSERT INTO tmp_employee (old_id, name)
SELECT id, name
FROM employee
WHERE
(SELECT COUNT(*) FROM tmp_employee) = 0
OR id < (SELECT MIN(old_id) FROM tmp_employee);

We keep populating the old_id column so that the DELETE and UPDATE triggers can find the matching rows.

We didn’t list the new table’s primary key in the INSERT, so its values are generated automatically.

4)


Up to this point, nothing we have done could cause any damage. But the next step is more dangerous. Before proceeding, verify that we didn’t do anything wrong and that our statements didn’t fail.
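
For example, a rough sanity check (not a full data comparison) could be:

-- every original row must have a copy with a matching old_id
SELECT COUNT(*) AS missing_rows
FROM employee e
LEFT JOIN tmp_employee t ON t.old_id = e.id
WHERE t.old_id IS NULL;
-- expected result: 0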

5)


If everything’s ok, let’s switch the tables:

RENAME TABLE
employee TO old_employee,
tmp_employee TO employee;

This operation is atomic, so no query is expected to fail.

6)


Drop the old table and the old_id column.
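
Something like the following should be enough; note that the INSERT trigger from step 2 stayed attached to the renamed table, so it disappears together with old_employee:

DROP TABLE old_employee;
ALTER TABLE employee DROP INDEX idx_old_id, DROP COLUMN old_id;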

Enjoy!


