I’ve been using MySQL professionally for a little over two years now, and have found it increasingly inadequate for very large databases. I got first-hand experience with this while building a 7-million-record database for a client of mine. I resolved some of the issues by adding the proper indexes to the table structures, but performance was still underwhelming when it came to keyword searches. Since I knew the client didn’t have the funds for me to solve that problem, I just left it the way it was.
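For keyword searches like the ones that gave me trouble, MySQL does offer FULLTEXT indexes with MATCH ... AGAINST queries, which usually beat LIKE '%term%' scans. Here is a minimal sketch of what that looks like; the table and column names (articles, title, body) are hypothetical, and the snippet just assembles the SQL rather than connecting to a server:

```python
# Hypothetical schema: an `articles` table with `title` and `body` columns.
# First, a FULLTEXT index covering the searchable columns:
create_index = "ALTER TABLE articles ADD FULLTEXT INDEX ft_title_body (title, body);"

def keyword_search_sql(terms):
    """Build a MATCH ... AGAINST query requiring every keyword (boolean mode)."""
    boolean_query = " ".join("+" + t for t in terms)  # "+" means the term is required
    return (
        "SELECT id, title FROM articles "
        "WHERE MATCH(title, body) AGAINST('%s' IN BOOLEAN MODE)" % boolean_query
    )

print(keyword_search_sql(["solr", "lucene"]))
```

Even with a FULLTEXT index, though, relevance ranking and very large result sets are where MySQL starts to strain, which is what pushed me to look further.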
But as a person who is always trying to find answers to my problems, I felt the need to do some investigative research. How are websites like Google able to handle billions of records and still return fast, responsive queries? It took me a while, but I discovered Apache Solr as the answer to my prayers. Built on the Lucene search engine library, Solr is able to handle indexes running into the billions of records. The only catch to using Solr is that you must be on a server that runs Java. In my experience, the only hosts that allow Java are dedicated servers, which can cost up to $200 a month. So obviously this search platform is not for your average run-of-the-mill website, but then what run-of-the-mill website needs millions of records anyway?
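To give a feel for how you talk to Solr once it is running: queries go over plain HTTP to the core’s /select handler, and results come back as JSON. A minimal sketch, assuming a local Solr install on its default port 8983 with a hypothetical core named products:

```python
from urllib.parse import urlencode

# Hypothetical core name "products"; 8983 is Solr's default port,
# and /solr/<core>/select is its standard query endpoint.
SOLR_BASE = "http://localhost:8983/solr/products/select"

def solr_query_url(keywords, rows=10):
    """Build the URL for a simple Solr keyword search returning JSON."""
    params = {"q": " ".join(keywords), "rows": rows, "wt": "json"}
    return SOLR_BASE + "?" + urlencode(params)

print(solr_query_url(["red", "shoes"]))
# http://localhost:8983/solr/products/select?q=red+shoes&rows=10&wt=json
```

From there you would issue an HTTP GET against that URL and parse the JSON response; the point is that any language that can make an HTTP request can query Solr, so your PHP or Python site doesn’t need to touch Java directly.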
As most of my clients are small-time operations or start-up companies, I haven’t had the opportunity to implement Apache Solr. Anyone looking into developing enterprise-level web applications can find more information about Solr here. As for me, I’m just going to have to land a big client in order to play around with Apache Solr.
Does anyone have any great tutorials for Solr? If you do, post them in the comments section.