Tutorial: MySQL - filtering a list against keywords, both list and keywords > 20 million records (slow)



Question:

I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword.
The 'term' field is a VARCHAR(320) and the 'keyword' field is a VARCHAR(64).

My current strategy is:

SELECT table1.term, table2.keyword
FROM table1
INNER JOIN table2
  ON table1.term LIKE CONCAT('%', table2.keyword, '%');

This is not working; it takes forever.
It's not the server, as far as I can tell (see notes).

How might I rewrite this so that it runs in under a day?

I have entertained in-memory tables, or switching to InnoDB and making the buffer pool large enough to hold both tables. Unfortunately, each MySQL connection runs on a single thread bound to one CPU, but I have 4 cores (well, "8" with hyperthreading); if I could distribute the workload, that would be fantastic.
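One way to distribute the workload across cores (a sketch, not from the question: it assumes table2 has an auto-increment primary key named `id`, which is hypothetical) is to partition table2 into ranges and run the same join from several client connections at once, one range per connection:

```sql
-- Hypothetical sketch: split table2 by primary key range and run one
-- range per client connection, so each connection's thread can occupy
-- a separate core. Assumes table2 has an integer PK column `id`.

-- Connection 1:
SELECT table1.term, table2.keyword
FROM table1
INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%')
WHERE table2.id BETWEEN 1 AND 5000000;

-- Connection 2:
SELECT table1.term, table2.keyword
FROM table1
INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%')
WHERE table2.id BETWEEN 5000001 AND 10000000;

-- ...and so on for the remaining ranges.
```

Note this only parallelizes the work; each connection still performs the same non-indexable leading-wildcard scan, so the total cost is unchanged.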

Notes:

  1. Regarding server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is larger than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a 2x dual-core Xeon 2U beast with fast SAS drives and 8G of RAM, tuned for the MySQL workload.

  2. I just remembered that I only index the first 80 characters of the 'term' field (to save disk space); I'm not sure whether this is hurting or helping.

  3. MySQL 5.0.32, Debian Lenny x86_64


Solution:1

You want to set up a full-text index, then do a search against that. Right now, your unique index probably isn't helping the search at all (because of the leading '%' in the search).

That means it's almost certainly running a full scan of table1 for each item in table2. Calling that grossly inefficient is putting it nicely. Building a full-text index is somewhat slow (though probably faster than what you're doing right now), but once that's done, the searching should go a lot faster.
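A minimal sketch of what this could look like with MySQL's built-in facility (an assumption on my part, not part of the original answer: MyISAM supports FULLTEXT indexes on MySQL 5.0, and the per-keyword loop would have to be driven from application code, since MATCH ... AGAINST requires a constant search string rather than a joined column):

```sql
-- Build the full-text index once (slow, but a one-time cost):
ALTER TABLE table1 ADD FULLTEXT INDEX ft_term (term);

-- Then, for each keyword (looped from the application;
-- 'somekeyword' is a placeholder):
SELECT term
FROM table1
WHERE MATCH(term) AGAINST ('+somekeyword' IN BOOLEAN MODE);
```

Two caveats: full-text search matches whole words, not arbitrary substrings the way `LIKE '%kw%'` does, and by default MySQL ignores words shorter than `ft_min_word_len` (4 characters), so results will not be identical to the original query.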

As to what to use to do the full-text indexing: while MySQL has a built-in full-text indexing capability, I doubt it'll help you a lot -- with 20 million rows, its performance is pretty poor (at least in my experience). Sphinx is a bit more work to set up, but is a lot more likely to give you adequate performance.


Solution:2

First, you should normalize your schema: add a third table that stores the relation between terms and keywords as term_id <-> keyword_id pairs, rather than what you are doing now, keeping them in a character field separated by spaces.
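A sketch of the suggested junction table (table and column names are illustrative, and it assumes both existing tables have integer `id` primary keys, which the question does not state):

```sql
-- Junction table relating terms to the keywords they contain.
CREATE TABLE term_keyword (
  term_id    INT UNSIGNED NOT NULL,
  keyword_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (term_id, keyword_id),
  KEY idx_keyword (keyword_id)
) ENGINE=MyISAM;

-- Once populated, finding terms that contain a keyword becomes an
-- indexed join instead of a wildcard scan:
SELECT t.term, k.keyword
FROM term_keyword tk
JOIN table1 t ON t.id = tk.term_id
JOIN table2 k ON k.id = tk.keyword_id;
```

The expensive part, of course, is populating the junction table in the first place, which still requires matching every term against every keyword once.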

