laravel,eloquent,innodb,laravel-5
Looks like you are running out of memory. Try querying half of the results, or maybe just 100, to see if that at least fixes the white page. If so, use chunk: Movies::chunk(200, function($movies) { foreach ($movies as $movie) { var_dump($movie); } }); ...
It does not sound like a database connection issue; this sounds like one of the following: Caching: you modified your Python files and have not restarted the web server process (e.g. Apache) to pick up the changes. I suggest you try reloading your python files on the hosting web server...
In InnoDB, the autoincrement value is not part of the table's metadata and is reset on each server restart. InnoDB uses the following algorithm to initialize the auto-increment counter for a table t that contains an AUTO_INCREMENT column named ai_col: After a server startup, for the first insert into a...
It won't start because you changed your InnoDB log file size (or in this case, you commented it out and MySQL is using the default now): log file ./ib_logfile0 is of different size 0 503316480 bytes InnoDB: than specified in the .cnf file 0 5242880 bytes! If you...
php,mysql,innodb,mysql-slow-query-log
It's using the wrong key instead of the index on the 3 columns. You can hint at indexes with the USE KEY ... syntax: https://dev.mysql.com/doc/refman/5.1/en/index-hints.html You may also want to try reordering your key on the 3 columns. Generally, you want the most restrictive column to be first in the index,...
mysql,amazon-web-services,innodb,amazon-rds,socialengine
It looks like your issue comes from the fact that you overwrote too much in the database.php settings file. Check out database.sample.php for all required setting keys; here it is below for reference: defined('_ENGINE') or die('Access Denied'); return array( 'adapter' => 'mysqli', 'params' => array( 'host' => "rds connection string", 'username'...
Since the API does not provide much information about the behavior when SQLTransientException happens apart from below when the driver has determined that the timeout value that was specified by the setQueryTimeout method has been exceeded and has at least attempted to cancel the currently running Statement Verified the Mysql...
Logically at least, if the score value for a row changes, then the entry for the row with the old value is deleted from the index and the entry with the new value is inserted (or vice versa). This can be a bit more complicated if the update can...
The answer is, that it is indeed "more complicated than that", unfortunately. First, some clarifications. Redo Log The "redo log" is configurable via two parameters: innodb_log_file_size — controls the size of each log file created, in bytes. innodb_log_files_in_group — controls the number of log files to create, each of innodb_log_file_size...
Why is query A taking more than a second to run? Yuck! And yes, what you're seeing is exactly how I'd expect it to behave. Primary keys 101 through 300 are reserved immediately while inserting the new rows. This takes a couple of milliseconds. It then spends more than 1...
For performance on that one SELECT: INDEX(tag_id, datetime) -- in this order See my blog for further discussion. Suggest WHERE `datetime` >= '2015-01-01' AND `datetime` < '2015-01-01' + INTERVAL 6 MONTH You were missing all of Jan 1 and most of June 30....
innodb,mysql-5.5,mysql-5.1,mysql-5.6
I've found the problem! The state column was defined as char(1). Interestingly, SELECTs containing state=1 do use indexes most of the time, but UPDATEs never do. However, specifying state="1" always uses indexes. So in summary, if col is a char and col is indexed: UPDATE table SET col=2 WHERE col=1 will...
mysql,django,innodb,django-south
I found the problem. The key is innodb_file_format. I restored data from a production database backup. Production's MySQL config has innodb_file_format=Barracuda, but my local one had the default value, Antelope. I think a more proper error message would be: ROW_FORMAT=DYNAMIC requires innodb_file_format=Barracuda. All I had to do was set innodb_file_format = Barracuda in my.ini,...
mysql,sql,csv,phpmyadmin,innodb
It turns out it's not a minus (negative) symbol, it's a FRIKKIN' TILDE! When I run this query, I get the exact number of rows in my table; the result is the same. select count(writer_id) from writer There is a detailed answer here. The problem, it turns out, isn't a problem; it has been...
mysql,sql-update,innodb,explain
Yes. You want an index on H_M_SAMP(M_ID): create index idx_h_m_sampe_1 on h_m_sampe(m_id); ...
I figured out the issue. I upgraded the python-MySQL version to the latest. It at least managed to tell me that there was an error with one of the MySQL tables. I repaired the MySQL table, and everything is working fine. For anyone who faces a similar issue, it is advisable to install the latest...
mysql,concurrency,innodb,myisam
Note: This answer assumes that you are InnoDB which allows row level locking instead of MyISAM which requires table locks. For cases like this, you would use transactions and READ/WRITE locks. The exact details of which you need vary from case to case, and I cannot answer that without knowing...
You should create a relationship between the two; as you can see here, you will be able to insert a null value into the foreign key field and fill it later with an employee_id. This will help with finding rows and making sure there is no useless data floating about...
I changed my distribution from MariaDB to MySQL, and also tested the "InnoDB Memcached Plugin" in MySQL 5.1 and 5.5, but this plugin does not work.
For logging quickly followed by analysis... Gather the data into a MyISAM table with no indexes. After 5 min (1.2M rows!): Analyze it into InnoDB "Summary Table(s)". DROP TABLE or TRUNCATE TABLE. The analysis would be put into other table(s). These would have summary information and be much smaller than...
OK found a solution for this. It is possible to use validCheckSum to tell Liquibase that this changeset has been modified. I just modified the changeset and added the tag like this. <changeSet author="author" id="id"> <validCheckSum>oldChecksum</validCheckSum> <validCheckSum>newChecksum</validCheckSum> <addColumn tableName="TABLE"> <column name="COLUMN" type="TEXT" /> </addColumn> </changeSet> After that liquibase accepted the...
Plan A: LOAD the tables in the right order. (Load Countries before any table needing country. Etc.) Plan B: DISABLE KEYS ... do the LOADs ...ENABLE KEYS...
mysql,innodb,database-performance,database-partitioning,pruning
You don't need to do anything if you have no partitioning yet. According to this thread this will be done automatically: you can use ALTER TABLE to add partitioning to the table, keep in mind though that this will actually create the new partitioned table first, then copy over all...
mysql,performance,select,innodb,where-clause
SELECT primary_key FROM key_word WHERE hashed_word='001' is faster than SELECT indexVal FROM key_word WHERE hashed_word='001' because in InnoDB the primary key value is always included in any secondary index; this means that primary_key is read from the index. In the second query however, MySQL first reads the primary key from...
php,mysql,database,timeout,innodb
If this is a web application and you are trying to hang onto the transaction from one page to the next, don't; it won't work. What do you mean by "just after"? If you are doing nothing between the two statements, even a timeout of 1 second should be big...
You must use a procedure to achieve this task: DELIMITER $$ CREATE PROCEDURE InsertRandomRows(IN NumRows INT) BEGIN DECLARE i INT; SET i = 1; START TRANSACTION; WHILE i <= NumRows DO INSERT INTO table_name (status, tel_mobile, tel_home, age, call_time, created_at, fio, address, comment) VALUES ( ROUND(RAND() * 1000000), ROUND(RAND() *...
The information you have written here about what you are looking for does not indicate which one of the engines would fit you better. Both meet your requirements. A very simplified way to determine which engine suits you better is whether you are looking for speed or consistency, e.g. MyISAM for...
mysql,database,innodb,openshift,cartridge
The default storage engine for mysql is set with an environment variable. Add the cartridge, then run the below to use InnoDB: $ rhc env add OPENSHIFT_MYSQL_DEFAULT_STORAGE_ENGINE=InnoDB -a NameApp $ rhc app-restart NameApp See the mysql cartridge configuration where this is set: https://github.com/openshift/origin-server/blob/master/cartridges/openshift-origin-cartridge-mysql/conf/my.cnf.erb#L39...
mysql,sql,performance,query-optimization,innodb
Your query SELECT Transaction.id .... FROM Transaction INNER JOIN Agent ON Agent.id = Transaction.agent_id INNER JOIN Distributor ON Distributor.id = Transaction.distributor_id INNER JOIN TransactionDetail ON Transaction.id = TransactionDetail.transaction_id WHERE TransactionDetail.type = 'Admin' AND Transaction.status IN ('pending', 'processing', 'success', 'rejected') ORDER BY issued_date DESC LIMIT 0 , 10 Now you already...
SELECT FOR UPDATE locks the rows and any associated index entries, the same as if you issued an UPDATE statement for those rows. But if autocommit is enabled, the rows matching the specification are not locked. MySQL InnoDB locks on joined rows...
I think your id_album must be UNSIGNED, same as id in album.
java,mysql,spring,hibernate,innodb
Ok, thanks to @M.Deinum, I've found a solution. First, remove/comment the following line from the Hibernate configuration. <property name="hibernate.current_session_context_class">thread</property> Then, add the following line to your Spring context configuration. Don't forget to add the proper namespaces; more info/example here. <tx:annotation-driven/> Also make sure you have a transactionManager bean nearby. The last step is...
php,mysql,database,phpmyadmin,innodb
Check out this tutorial, I found this helpful and also includes some configuration files which may be of use http://www.lynda.com/phpMyAdmin-tutorials/Setting-up-foreign-key-constraint/144202/157544-4.html
First of all, assuming id is a primary key or at least an indexed column, an INSERT should not lock the table, so chances are some other update/delete query is executing at the same time as the deletion of the records. If that is not the case, then it can be due to "gap locking"...
I found the issue here in my.ini (which can be located in C:\ProgramData\MySQL\MySQL Server 5.6 or a similar folder, based on the installation). There is a parameter innodb_flush_log_at_trx_commit whose value defaulted to zero, causing the 1 MB innodb_log_buffer_size to be written to disk at each commit. This was resulting in major...
mysql,locking,innodb,read-write
In InnoDB, INSERTs do not take a table lock. (MyISAM is a different story.) Roughly speaking, each read or write to an InnoDB table will lock only the row(s) needed. If there is no overlap between the rows of one query and another, then there is no waiting. Some issues...
mysql,innodb,database-deadlocks
From MySQL Documentation - 14.2.7.9 How to Cope with Deadlocks (highlight added): When modifying multiple tables within a transaction, or different sets of rows in the same table, do those operations in a consistent order each time. Then transactions form well-defined queues and do not deadlock. For example, organize database...
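The "consistent order" advice is language-agnostic. Here is a minimal Python threading sketch (the lock names and transfer function are hypothetical, standing in for rows or tables) showing two concurrent workers that always acquire their locks in sorted order, so they form a queue instead of deadlocking:

```python
import threading

def transfer(src_name, dst_name, locks, results):
    # Always acquire locks in a fixed (here: alphabetical) order,
    # regardless of which resource is source or destination.
    first, second = sorted([src_name, dst_name])
    with locks[first]:
        with locks[second]:
            results.append((src_name, dst_name))

locks = {"a": threading.Lock(), "b": threading.Lock()}
results = []
t1 = threading.Thread(target=transfer, args=("a", "b", locks, results))
t2 = threading.Thread(target=transfer, args=("b", "a", locks, results))
t1.start(); t2.start()
t1.join(); t2.join()
# Both threads finish. If each thread grabbed "its own" lock first
# (a then b vs. b then a), they could deadlock waiting on each other.
```

The same principle applies to transactions: if every transaction touches tables and rows in the same order, lock waits stay linear and never form a cycle.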
php,mysql,sql,optimization,innodb
You should be able to accomplish this in one query with a subquery for the optional data. I'm using a LEFT JOIN for the extra data so that items that don't have that data will still be returned. The reason for the subquery is that you're using COUNT. Without the...
Apparently this is a known bug within MySQL 5.1, as mentioned in a few bug reports below. http://bugs.mysql.com/bug.php?id=44416 http://bugs.mysql.com/bug.php?id=45844 I updated the MySQL version to the latest (5.1.73) and then force-recovered InnoDB at level 6 (it could only be started at this level in my case). After that,...
The biggest problem with MyISAM tables is that they are not transactional. From what you say, you will only be creating records and then reading them (possibly deleting them later) but there will be little or no editing. This situation is what MyISAM was designed for, it is fast for...
php,mysql,compare,innodb,varchar
Here is a SQL snippet showing how you can convert the value in your varchar column to a date type: SELECT STR_TO_DATE(SUBSTRING_INDEX(`creator`, '|', -1), '%m-%d-%Y %h:%i %p') http://sqlfiddle.com/#!9/710a2/7/0 For further inspiration: "how to convert a string to date in mysql?" and "How to split the name string in mysql?" I hope it is what you are...
Simple code that does one-row INSERTs without any tuning maxes out at about 100 rows per second in any engine, especially InnoDB. But, it is possible to get 1000 rows per second or even more. The quick fix for InnoDB is to set innodb_flush_log_at_trx_commit = 2; that will uncork the...
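To illustrate the batching idea (separate from the flush setting), here is a small Python sketch using the stdlib sqlite3 module as a stand-in for MySQL; the table and row values are made up. One transaction wrapping a multi-row insert replaces a thousand autocommitted single-row INSERTs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, "row-%d" % i) for i in range(1000)]

# All 1000 rows go in as one transaction (the 'with' block commits once),
# instead of paying a commit/flush per row.
with conn:
    conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

In MySQL the equivalent is a batched INSERT ... VALUES (...), (...), ... or an explicit transaction around the loop; either way, the per-commit disk flush is amortized over many rows.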
mysql,sql,full-text-search,innodb,myisam
Option 3 is better: Upgrade to MySQL 5.6 where FULLTEXT indexes are supported on InnoDB. If you can't do this you're really asking for trouble. Using #1 is futile, it won't scale beyond even the most trivial sized databases. Going with #2 is a bad call, MyISAM is a notoriously...
mysql,sql,optimization,indexing,innodb
Using LOWER(currency) is rendering your index useless. Normalize the data in the table: UPDATE rates SET currency = LOWER(currency) WHERE 1; And make sure that any arguments passed into the query are put into lowercase before it hits the query. Additionally, you can make the currency field an ENUM type...
In principle, InnoDB doesn't need PK sorted in the secondary index. For each record in the secondary index it calls Handler_read_rnd to get fields from PRIMARY index. But for optimal reads it might sort it. It should be possible to check this. After a SELECT that reads from the secondary...
InnoDB uses a feature called Multi-version concurrency control. Locking in shared mode is not required, since MVCC will retain earlier versions of the row for your SELECT statement to be able to read if required. So the answer is 'no', running a SELECT statement will not need to lock any...
mysql,innodb,database-deadlocks
I don't see what went wrong. Suggest you file a bug at bugs.mysql.com. Meanwhile, you could use pt-online-schema-change to do the change with virtually no downtime....
I'm pretty sure the reason is a data type mismatch between the column id in sbb_categories and category in sbb_catalog. Remove unsigned from the former or add it to the latter. The same goes for all the other foreign key references. As @Mihai pointed out in a comment, you...
I just figured out another way to structure a "KEEP ONLY" type of command! Say you want something like this: KEEP the tuples that satisfy <massive_expression> All you have to do is negate <massive_expression> in the DELETE command, like so: DELETE FROM table WHERE ! (massive_expression); It makes total...
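The logic is easy to sanity-check outside SQL. This hypothetical Python sketch (sample rows and predicate invented) splits rows into kept and deleted sets by negating the keep-predicate, mirroring DELETE ... WHERE NOT (massive_expression):

```python
def keep_only(rows, predicate):
    """Simulate 'KEEP ONLY predicate': delete rows where NOT predicate(row)."""
    deleted = [r for r in rows if not predicate(r)]  # DELETE ... WHERE !(expr)
    kept = [r for r in rows if predicate(r)]         # what survives
    return kept, deleted

rows = [1, 5, 12, 7, 30]
kept, deleted = keep_only(rows, lambda r: r >= 7)
print(kept)     # [12, 7, 30]
print(deleted)  # [1, 5]
```

The kept and deleted sets partition the table, which is exactly why negating the expression inside DELETE gives a "keep only" command.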
php,mysql,foreign-keys,primary-key,innodb
Using foreign keys is up to you; it is not mandatory, but it provides you with some functionality, such as: making sure that you don't get a row inserted with an invalid created_by; when you delete an account, "if" you want, you can automate the news deletion so you don't...
This may be relevant to InnoDB engine in some circumstances. For InnoDB, with SELECT queries issued in READ COMMITTED and REPEATABLE READ transaction isolation levels a Consistent Read mode is used, which is an implementation of MVCC otherwise known as optimistic concurrency. Under this mode, the reading query doesn't issue...
Using DYNAMIC or COMPRESSED means that InnoDB stores varchar/text/blob fields that don't fit in the page completely off-page. But other than those columns, which then only count 20 bytes per column, the InnoDB row size limit has not changed; it's still limited to about 8000 bytes per row. InnoDB only...
php,sql,symfony2,doctrine2,innodb
I think you may have some errors in variable names and 'mappedBy' properties as a result of nomenclature that does not reflect your explanation of the relationship between Member and Account. If there is a OneToMany relationship between the two, intuitively I would write class Member { /** * @ORM\OneToMany(targetEntity="Account",...
This is done with ANALYZE TABLE table_name; Read more about it here. ANALYZE TABLE analyzes and stores the key distribution for a table. During the analysis, the table is locked with a read lock for MyISAM, BDB, and InnoDB. This statement works with MyISAM, BDB, InnoDB, and NDB tables. ...
php,mysql,transactions,innodb,critical-section
The code is fine, with one exception: Add FOR UPDATE to the initial SELECT. That should suffice to block the second button press until the first DELETE has happened, thereby leading to the second one "failing". https://dev.mysql.com/doc/refman/5.5/en/innodb-locking-reads.html Note Locking of rows for update using SELECT FOR UPDATE only applies when...
Looks like you need to configure MySQL to use the correct BINLOG format. You should try to set it to MIXED or ROW (if MIXED does not work). You can learn more about this on http://dba.stackexchange.com/questions/58459/mysql-error-impossible-to-write-to-binary-log...
mysql,sql,query-optimization,innodb,mariadb
The problem was not in MySQL / MariaDB, but in OpenVZ and the IO scheduling on the server. After migrating to faster disks the problem disappeared.
You need to join recipients only if you are not enforcing a foreign key constraint between recipients.id and emails.recipent_id, and you want to exclude recipients who are not (any longer) enlisted in the recipients table. Otherwise, omit that table from the join straight away; you can use emails.recipient_id instead of...
php,mysql,performance,innodb,mysql-slow-query-log
INSERT must update all the indexes for each row inserted. However, for a single row, we are talking milliseconds at most. INSERT ... SELECT ... can be inserting arbitrarily many rows -- either the SELECT or the INSERT could be the problem. INSERT ... VALUES (1,2,3), (4,5,6), ... (a 'batched'...
mysql,innodb,mariadb,information-schema
Log sequence number increases each time a client writes to InnoDB. mysql> pager grep "Log sequence number" PAGER set to 'grep "Log sequence number"' mysql> show engine innodb status\G Log sequence number 243755747560 1 row in set (0.00 sec) To know which table was modified you can scan the REDO...
Please try the following query: INSERT INTO tb_production(id,col1,col2) SELECT bkp.id,bkp.col1,bkp.col2 FROM tb_backup bkp LEFT JOIN tb_production prd ON bkp.id=prd.id WHERE prd.id IS NULL ORDER BY bkp.id LIMIT 2000; ...
You can probably succeed. But it is not wise. Something random (e.g., a network glitch) could come along and cause that huge transaction to abort. You might be blocking other activity for a long time. Etc. Are the "old" records everything older than date X? If so, it would be much...
SELECT service.code, line, MAX(service.region) AS region FROM service INNER JOIN pattern ON pattern.service = service.code WHERE region IN ('Y', 'EM') GROUP BY service.code, line HAVING SUM(region IN ('Y', 'EM')) = 1; The HAVING clause "counts" the rows per group and filters those where multiple regions were found. The aggregate function...
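As a sanity check of the HAVING trick, here is a small Python sketch (sample rows invented) that groups rows and keeps only the groups where exactly one region matched, mirroring HAVING SUM(region IN ('Y', 'EM')) = 1:

```python
from collections import defaultdict

# (code, line, region) rows, as they would look after the WHERE filter
rows = [
    ("S1", 1, "Y"),
    ("S1", 1, "EM"),  # two matching regions -> group excluded
    ("S2", 1, "Y"),   # exactly one match  -> group kept
    ("S3", 2, "EM"),  # exactly one match  -> group kept
]

groups = defaultdict(list)
for code, line, region in rows:
    groups[(code, line)].append(region)

# HAVING SUM(region IN ('Y','EM')) = 1: booleans sum to the match count
result = {key: max(regions) for key, regions in groups.items()
          if sum(r in ("Y", "EM") for r in regions) == 1}
print(result)  # {('S2', 1): 'Y', ('S3', 2): 'EM'}
```

Summing a boolean condition is the standard way to count matches per group, and MAX(region) then picks the single region of each surviving group.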
Information about a table is stored in two places: the server-wide table.frm file, and the storage-engine-specific InnoDB dictionary. These two must be in sync, but there is no reliable mechanism to enforce this consistency. For a number of reasons, the InnoDB dictionary gets out of sync. In your case there is an orphaned record...
Ok, I have googled and experimented a bit. I set the following values in my.cnf: innodb_buffer_pool_size = 768M sort_buffer_size = 64M key_buffer_size = 64M read_buffer_size = 64M Which brings my query down to 20 minutes which is ok for me. Some references: http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/ http://www.mysqlperformanceblog.com/2010/10/25/impact-of-the-sort-buffer-size-in-mysql/ There is also a tool: http://mysqltuner.com/...
php,mysql,wamp,innodb,wampserver
The MySQL server by default accepts connections on TCP port 3306. So both of your MySQL servers are trying to use the same port, which is not allowed, so whichever MySQL server starts second will fail because it cannot get access to port 3306. As WAMPServer's MySQL server is configured to only...
No, it does not appear that you can create a table like another table without partitions, if it is already partitioned, in one command as you suggested above. The partition is part of the table definition and is stored in the metadata. You can check that by executing show create...
mysql,amazon-web-services,innodb,amazon-rds,errno
DDL in InnoDB is not transactional so it's possible that information in a .frm file and the InnoDB dictionary is different. In your case it looks like the .frm file is missing but there is an orphaned record in the dictionary (well, actually records in few dictionary SYS_* tables). You...
mysql,transactions,innodb,database-schema,ddl
BEGIN in some contexts is the same as START TRANSACTION. phpmyadmin performs one query at a time unless you put the batch in the window for such. Try that. (And SET autocommit = 0 is irrelevant because of the explicit START.) Edit Since the "transaction" really had a DDL statement,...
When InnoDB updates a record there are two paths possible: if the new record size is the same (which is the case for your table, btw) then the new record is written at the same position as the old one, i.e. the new record overwrites the old one. If the record size is...
Consider using a different data type, eg TEXT can store up to 65535 characters. A good alternative for storing notes. See: What is the MySQL VARCHAR max size?...
You've missed calculating all of InnoDB's overhead for storing these rows. You should have: 4 (INT) + 4 (INT) + 1 (TINYINT) + 1 (TINYINT) + 4 (INT) + 4 (TIMESTAMP) + 1 (Null bitmap, rounded up to whole bytes) + 5 (Row header) + 6 (ROW_ID: Implicit cluster key,...
You're missing the parentheses: CREATE TABLE IF NOT EXISTS product_guest_resale ( id_guest varchar(50) NOT NULL, id_product varchar(100) NOT NULL, amount int(100) NOT NULL, PRIMARY KEY (id_guest, id_product), FOREIGN KEY (id_guest) REFERENCES guest(id_guest), FOREIGN KEY (id_product) REFERENCES product(id_product) ) ENGINE=InnoDb DEFAULT CHARSET=latin1; ...
php,mysql,database,innodb,auto-increment
You want a 2-part PRIMARY KEY where the second is an AUTO_INCREMENT that resets for each change in the first part? This is available directly in MyISAM, but not in InnoDB. It can be simulated. CREATE TABLE ... ( part1 ..., part2 TINYINT ZEROFILL UNSIGNED NOT NULL, ... PRIMARY KEY(part1,...
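To make the simulation idea concrete, here is a hedged Python sketch of a per-group counter; the class and names are illustrative only, and a real SQL implementation would need to compute MAX(part2)+1 for the group inside a transaction to avoid races:

```python
from collections import defaultdict

class GroupedAutoIncrement:
    """Simulate a 2-part key where part2 auto-increments per part1,
    like MyISAM's behavior for a second AUTO_INCREMENT key column."""
    def __init__(self):
        self.counters = defaultdict(int)  # part1 -> last part2 issued

    def next_id(self, part1):
        # SQL analogue (hypothetical, must run inside a transaction):
        #   INSERT ... SET part2 = (SELECT COALESCE(MAX(part2),0)+1
        #                           FROM tbl WHERE part1 = ?)
        self.counters[part1] += 1
        return self.counters[part1]

seq = GroupedAutoIncrement()
print(seq.next_id("A"))  # 1
print(seq.next_id("A"))  # 2
print(seq.next_id("B"))  # 1  (the counter restarts for each new part1)
```

Each distinct part1 value gets its own independent sequence, which is exactly what the MyISAM two-column AUTO_INCREMENT gives you natively.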
php,mysql,database,innodb,normalization
I would suggest creating a View which combines the two tables. In order to find out which fields match most closely, I would recommend using either a "Levenshtein" distance, or something a bit smarter like "Jaro/Winkler". I went through something similar to this a while ago and I blogged about...
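For reference, Levenshtein distance is just a dynamic-programming edit distance; a minimal Python implementation (illustrative only, not tied to any particular MySQL UDF) looks like this:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Lower scores mean closer matches, so ranking candidate rows by this distance gives the "most closely matching fields" ordering.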
That's easy to verify. My gut feeling told me the secondary index on b would be stored as (b, a, b), but InnoDB is smart enough to omit the redundant b: mysql> create table t1 (a varchar(32), b varchar(32), primary key (a,b)); mysql> alter table t1 add index(b); mysql> insert into t1 values('aaa','bbb'); Query...
mysql,innodb,self-referencing-table
If your fiddle is correct, you should be able to do this: SELECT * FROM comments WHERE comment_id__child IS NULL AND user_id=1; This works if you always populate the comment_id__child for 'parent' comment when editing it. ...
python,mysql,innodb,mysql-python
For a single result: SELECT * FROM table1 ORDER BY ABS(DATEDIFF(table1.Expiration, '2015-06-02')) ASC LIMIT 1; If you're worried about having a "tie": SELECT * FROM table1 WHERE ABS(DATEDIFF(table1.Expiration, '2015-06-02')) = ( SELECT MIN(ABS(DATEDIFF(table1.Expiration, '2015-06-02'))) FROM table1 ); Note however, these queries will never be fast; the 2nd requires every row...
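The "closest date" logic that ORDER BY ABS(DATEDIFF(...)) LIMIT 1 expresses can be sketched in plain Python (sample dates invented):

```python
from datetime import date

target = date(2015, 6, 2)
expirations = [date(2015, 5, 20), date(2015, 6, 5), date(2015, 7, 1)]

# Equivalent of ORDER BY ABS(DATEDIFF(Expiration, target)) ASC LIMIT 1:
# minimize the absolute day difference to the target date.
closest = min(expirations, key=lambda d: abs((d - target).days))
print(closest)  # 2015-06-05
```

Like the SQL version, this inspects every value to find the minimum distance, which is why such queries cannot use an index and will scan the whole table.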
mysql,indexing,primary-key,innodb
InnoDB stores the table rows in a clustered index based on the primary key. So, the data_length shows the size of pages occupied by the primary key. index_length shows the size of pages occupied by secondary (non-primary) indexes. Re your comment: Yes, it's unnecessary in this table to create an...
java,mysql,jdbc,innodb,isolation-level
My interpretation of http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html#isolevel_repeatable-read is that once you have read the data, it will be the same for that transaction for subsequent reads. If you waited to run countRows(con2) until after con1.commit(), then you would see 1, since it hasn't been read yet. I don't think there is any...
This is not a proper usage of AUTO_INCREMENT. If you want something special like you describe, do it yourself. MyISAM has a feature like that (the second column in a PRIMARY KEY could be AUTO_INCREMENT like that). But InnoDB is preferred these days. A way to simulate it is in...
The InnoDB data dictionary can grow without bounds as you open many tables, beyond innodb_additional_mem_pool_size, and it often does grow huge if you have thousands of tables. This would be independent of the number of connections. I've seen other people report that MySQL 5.6 has a lot of memory usage,...
This looks like an issue with "case sensitivity" in table names. It looks like table names are case sensitive in your web hosting environment, but not case sensitive on your localhost. Reference: 9.2.2 Identifier Case Sensitivity https://dev.mysql.com/doc/refman/5.5/en/identifier-case-sensitivity.html To avoid problems caused by such differences, it is best to adopt a...
I would really recommend using either Percona MySQL or MariaDB. Both have tools that will help you get the most out of InnoDB, as well as some tools to help you diagnose and optimize your database further (for example, Percona's Online Schema Change tool could be used to alter your...
php,mysql,innodb,full-text-indexing
ALTER TABLE foo DROP FULLTEXT old_ft_index_name, ADD FULLTEXT(this, that); ...
Since the field is indexed, in the case of InnoDB: for answer 1, only 1 row is locked; for answer 2, only 2 rows are locked.
The trick is to join to 'checking' twice as shown below: select c.name , c.birthplace , bplace.checktype , c.birthdate , bdate.checktype from characters c join checking bplace on c.birthplace_checking = bplace.checking_id join checking bdate on c.birthdate_checking = bdate.checking_id ...
It turns out that there was one difference between the definitions of the two tables. The CHARSET was the true culprit. Master: ... ) ENGINE=InnoDB AUTO_INCREMENT=XXXXX DEFAULT CHARSET=latin1 Replica: ... ) ENGINE=TokuDB AUTO_INCREMENT=XXXX DEFAULT CHARSET=utf8 Command required to "fix" the table before restarting the replication: ALTER TABLE database.table CONVERT...
Is it possible that phpMyAdmin is ending the client session after Go is hit and a query is submitted? That is pretty much how PHP works. You send the request, it gets processed, and once done, everything (including MySQL connections) gets thrown away. With the next request, you start afresh.
php,mysql,innodb,auto-increment
Resetting the AUTO_INCREMENT will have little effect on performance. You would slightly reduce the storage size of the field by resetting. A better use of your time is running EXPLAIN on your common statements and making sure you have a proper index wherever you're trying to select specific records....