Category Archives: Programming

ElasticSearch Mock Solr Plugin

I just released an ElasticSearch plugin that mocks the Solr interface. With it, you can use tools and clients that are meant to talk to Solr with ElasticSearch instead. Some examples are Nutch, Apache ManifoldCF, and SolrJ apps. Currently, indexing and deleting documents is fully supported for both XML (the /update request handler) and JavaBin (the /update/javabin request handler). Basic support for the Solr search handler (/select) is also included, covering the q, start, rows, and fl parameters. The q parameter supports the full Lucene query syntax, and both XML and JavaBin response formats are supported.

To use the plugin:

  1. Install the plugin:

     $ES_HOME/bin/plugin install mattweber/elasticsearch-mocksolrplugin/1.0.0

  2. Update your client code to point at ElasticSearch and the /_solr REST endpoint. Specifying the index and type is optional; they default to “solr” for the index and “docs” for the type.

  3. Use your Solr client as normal.
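For example, if your SolrJ code previously pointed at a standalone Solr instance, only the base URL needs to change (the hosts, ports, and custom index/type names below are illustrative):

```
# Before: standalone Solr
http://localhost:8983/solr

# After: ElasticSearch with the mock Solr plugin
http://localhost:9200/_solr
http://localhost:9200/myindex/mytype/_solr
```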

I have tested the plugin with Nutch and various SolrJ test code. Using Nutch with ElasticSearch is the reason I wrote this plugin. Instead of extending Nutch to support ElasticSearch as an endpoint, I figured it would be much better to support any tool that talks to Solr. This plugin should greatly reduce the effort of testing and/or replacing Solr with ElasticSearch. It also opens the door to tools that were previously not available to ElasticSearch users.

Source available on GitHub:

Solr For WordPress on GitHub

I put Solr For WordPress on GitHub. This is the latest code for 0.3.0.

Running Specific Solr Unit Tests

Just realized that as of 09/17/09 and revision 816090 of Solr you can now run specific unit tests instead of everything at once. This makes a developer’s life (mine) much easier because you no longer need to wait for all of Solr’s tests to run just to test your particular piece of code.

To run specific testcase:

ant -Dtestcase=<CLASS NAME> junit

To run all tests for a specific package:

ant -Dtestpackage=<PACKAGE NAME> junit

To run all root tests of a specific package:

ant -Dtestpackageroot=<PACKAGE ROOT> junit
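For example, to run a single test class or package (the class and package names below are illustrative; substitute whatever you are working on):

```
ant -Dtestcase=BasicFunctionalityTest junit
ant -Dtestpackage=org.apache.solr.search junit
```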

Solr for WordPress 0.2.0 Released

I just released Solr for WordPress 0.2.0. This release completely replaces the default WordPress search without any special setup. I have also added i18n support so people can translate it into different languages, integrated it into the default WordPress theme, and added support to enable or disable specific facets. This release should make it much easier for people to get set up and working correctly. As usual, please let me know of any bugs you might find by opening a report at

Download here.

Solr AutoSuggest with TermsComponent and jQuery

I needed to implement an autosuggest/autocomplete search box for use with Solr. After a little research, I found the new TermsComponent feature in Solr 1.4. To use TermsComponent for suggestions, you set both the prefix and the lower bound to the input term, make the lower bound exclusive, and use the terms.fl parameter to set the source field. This means:

  • Set terms.lower to the input term
  • Set terms.prefix to the input term
  • Set terms.lower.incl to false
  • Set terms.fl to the name of the source field

Your resulting query should look something like this (using “spell” as the source field):

http://localhost:8983/solr/terms?terms.fl=spell&terms.lower=py&terms.prefix=py&terms.lower.incl=false

Note: This assumes you are using the default solrconfig.xml for Solr 1.4

In the example above I used “py” for my input term. You will then get output that looks similar to this (the terms and counts shown are illustrative):

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
  </lst>
  <lst name="terms">
    <lst name="spell">
      <int name="python">42</int>
      <int name="pycon">7</int>
    </lst>
  </lst>
</response>

Now that we have TermsComponent set up and working correctly, it’s time to create the autosuggest/autocomplete search box. Since I am not one to reinvent the wheel, I did a quick search and found a jQuery UI plugin for autocomplete. The search frontend I was developing was already using jQuery, so this plugin was a perfect fit.

This autocomplete plugin is not in the current release of jQuery UI, so I needed to grab it from their Subversion repository. You can find instructions on where to get it here.

The plugin supports AJAX calls for the data source. It expects the data source to return each suggestion on its own line, for example:

python
pycon
pyramid

As you saw above, this is not what the direct output from Solr looks like. On top of that, it is not a good idea to expose your backend server via your frontend code. Time to write a Java servlet.

Unfortunately, the Java client for Solr, SolrJ, didn’t support TermsComponent yet. I decided to add this support, so please see this post for information on my patch.

Assuming you are using a version of SolrJ with my patch, here is a simple servlet that provides the functionality we need:

protected void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
    String q = req.getParameter("q");
    String limit = req.getParameter("limit");
    PrintWriter writer = res.getWriter();
    List<Term> terms = query(q, Integer.parseInt(limit));

    if (terms != null) {
        // write one suggestion per line, the format the autocomplete plugin expects
        for (Term t : terms) {
            writer.println(t.getTerm());
        }
    }
}

And the query method:

private List<Term> query(String q, int limit) {
    List<Term> items = null;
    CommonsHttpSolrServer server = null;

    try {
        server = new CommonsHttpSolrServer("http://localhost:8983/solr");
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }

    // escape special characters in the input term
    // (ClientUtils is org.apache.solr.client.solrj.util.ClientUtils)
    q = ClientUtils.escapeQueryChars(q);

    // build a TermsComponent request against the /terms handler,
    // using the TermsComponent helpers added by the patch mentioned above
    SolrQuery query = new SolrQuery();
    query.setQueryType("/terms");
    query.setTerms(true);
    query.addTermsField("spell");
    query.setTermsLower(q);
    query.setTermsPrefix(q);
    query.setTermsLowerInclusive(false);
    query.setTermsLimit(limit);

    try {
        QueryResponse qr = server.query(query);
        TermsResponse resp = qr.getTermsResponse();
        items = resp.getTerms("spell");
    } catch (SolrServerException e) {
        items = null;
    }

    return items;
}

Now you may be wondering why I used the “q” and “limit” parameters. These are what the jQuery autocomplete plugin sends to the servlet: “q” is the input term, and “limit” is the maximum number of suggestions to return.
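Putting those pieces together, a request and response pair for the servlet might look like this (the servlet path and the suggestions shown are illustrative):

```
GET /completion?q=py&limit=5

python
pycon
pyramid
```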

Now to hook everything together. Insert the following JavaScript into the head of your search page, replacing “#searchbox” with the id of the input box you want to use for autocompletion. Also insert the correct URL to your servlet.

$(document).ready(function() {
    $("#searchbox").autocomplete({
        url: 'completion',
        max: 5
    });
});

Update your CSS file with the required jQuery UI CSS:

/* Autocomplete */
.ui-autocomplete {}
.ui-autocomplete-results { overflow: hidden; z-index: 99999; padding: 1px; position: absolute; }
.ui-autocomplete-results ul { width: 100%; list-style-position: outside; list-style: none; padding: 0; margin: 0; } 

/* if  the width: 100%, a horizontal scrollbar will appear when scroll: true. */
/* !important! if line-height is not set, or is set to a relative unit, scroll will be broken in firefox */
.ui-autocomplete-results li { margin: 0px; padding: 2px 5px; cursor: default; display: block; font: menu; font-size: 12px; line-height: 16px; overflow: hidden; border-collapse: collapse; }
.ui-autocomplete-results li.ui-autocomplete-even { background-color: #fff; }
.ui-autocomplete-results li.ui-autocomplete-odd { background-color: #eee; }

.ui-autocomplete-results li.ui-autocomplete-state-default { background-color: #fff; border: 1px solid #fff; color: #212121; }
.ui-autocomplete-results li.ui-autocomplete-state-active { color: #000; background:#E6E6E6 url(images/ui-bg_glass_75_e6e6e6_1x400.png) repeat-x; border:1px solid #D3D3D3; }

.ui-autocomplete-loading { background: white url('images/ui-anim.basic.16x16.gif') right center no-repeat; }
.ui-autocomplete-over { background-color: #0A246A; color: white; }

Congratulations! You should now have a working Solr-based autocomplete search box!
Solr AutoCompletion

SolrJ TermsComponent Support

I was working on implementing an auto-complete search box today using Solr 1.4 and the new TermsComponent. TermsComponent is a simple plugin that provides access to Lucene’s term dictionary and is very fast. Being fast and the fact it can hook into a search index makes it perfect for an auto-completion server.

Unfortunately, SolrJ does not support this new functionality yet. Well, not officially, since you could always parse the raw response object yourself. That is exactly what I was doing until I figured I might as well just add the support to SolrJ. I did, and it was extremely easy.

I added support for the TermsComponent parameters and implemented a new TermsComponent response type. The TermsComponent response is parsed into a list of Term objects. The Term object has two methods, getTerm() and getFrequency(): getTerm() returns the suggested term, and getFrequency() returns the number of times the term appears in the index.

I have submitted my patch upstream for inclusion into a future version of SolrJ.

Here is the link to the JIRA bug report:

Here is the patch:

Solr for WordPress

Solr for WordPress is a WordPress plugin that interacts with an instance of the Solr search engine. With this plugin you can:

  • Index pages and posts
  • Perform advanced queries
  • Enable faceting on fields such as tags, categories, and author
  • Treat the category facet as a taxonomy
  • Add special template tags so you can create your own custom result pages to match your theme
  • Select pages to ignore, features to enable/disable, and the type of result information to output via configuration options
  • Hit highlighting
  • Dynamic result teasers

Solr for WordPress requires WordPress 2.7 or greater and an instance of Solr 1.3 or greater. Installation is simple: extract the plugin into your WordPress plugins folder, activate it, then point it at your Solr instance via the configuration page. From there, you can index all your pages and/or posts, and you are ready to perform searches against your WordPress data.

This plugin assumes your Solr schema contains the following fields: id, permalink, title, content, numcomments, categories, categoriessrch, tags, tagssrch, author, type, and text. The facet fields (categories, tags, author, and type) should be string fields. You can make tagssrch and categoriessrch any type you want as they are used for general searching. The plugin is distributed with a Solr schema you can use. I will eventually package up a version of Solr configured specifically for this plugin. Until then, the provided schema will have to do.
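A minimal schema.xml fragment for some of those fields might look like this (the field types shown are illustrative; the schema distributed with the plugin has the full definitions):

```
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="categories" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="tags" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="author" type="string" indexed="true" stored="true"/>
<field name="type" type="string" indexed="true" stored="true"/>
<field name="categoriessrch" type="text" indexed="true" stored="false" multiValued="true"/>
<field name="tagssrch" type="text" indexed="true" stored="false" multiValued="true"/>
```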

Integrating Solr for WordPress into your theme is quite simple as well. The plugin provides two template tags, one for a search box and another for search results. For the search box, use the s4w_search_form() tag. For the search results, use the s4w_search_results() tag. These template tags output valid XHTML that you can style with CSS.

This version of the plugin requires you to create your own search page template, then create a search page called “Search” using that template. It also requires you to manually update any search forms to point at the search page you just created (“/search/”), putting the query in the “qry” parameter. Future versions will completely replace the standard WordPress search functionality.

By default, faceting is enabled for the category, tags, author, and post type fields. Faceting allows your users to drill down into the search results, filtering on values of a particular facet. The category facet can be treated as a taxonomy as well.

Released Solr for WordPress 0.2.0

Plugin Home: Solr for WordPress
Download: Solr for WordPress 0.1.0
WordPress Hosted Plugin Page: Solr for WordPress

New Design

I have finally decided to update my site. I have been pretty busy with work the last couple years and have been slacking on the site. Well, I got an itch to start working on it again and decided to kick off my renewed interest with a completely new design. The design is a heavily modified version of the Elixir theme by Michael Whalen.

Along with the new design I have integrated Solr search. Solr is an amazing search engine. I wrote a plugin called Solr for WordPress that handles all the integration between Solr and WordPress. I will be writing a post about the plugin soon.

Comment Spam

Within a week of switching to WordPress for my blogging software, I started receiving a lot of comment spam. I found this amazing because I have had a blog for a few years now without any problems. I have had the occasional spam comment, but lately I have been receiving 3-7 of them a day. I know this is very little compared to high-volume sites, but it seems like a lot for a small site like mine. For the most part, the Akismet spam plugin WordPress ships with does an amazing job. It has let a few slip by, but that is no big deal.

This whole comment spam problem reminded me of a research paper I read a year or so ago called Defending Against an Internet-based Attack on the Physical World. It was about the threat of using APIs such as Google’s SOAP API to automate filling out request forms for catalogues and other material on thousands of sites in a victim’s name. This would cause the victim’s physical mail to become overloaded and very hard to manage. Imagine hundreds or thousands of pieces of mail being delivered to your house every day. The point being that I figure spammers are using a similar technique to find WordPress blogs and then spam them automatically.

I decided to see how easy it was. First, I checked whether I could sign up for Google’s SOAP API, but found out that they no longer offer this service. Without it, automating the attack is going to be a lot harder. Ignoring the API problem, I decided to find a search string that locates comment pages on WordPress blogs. I was amazed at how easy this was: I just went to a blog using the default WordPress theme and looked for keywords that would always be there. After about a second I came up with this search string:

"Leave a Reply" Name Mail Website "proudly powered by WordPress"

Typing this into Google found over 1,000,000 pages! Clicking a few of these verified that they were in fact WordPress comment pages. Next, I needed to write a program to automate parsing these links. Without the search API, I was stuck scraping the results pages directly. After about an hour I came up with this Python script. The script submits the search string above to Google, parses the first 100 results from the page, then submits a search for the next 100, and so on. While testing the script I noticed Google started blocking my searches, which is a good thing. I found a way around this by using different User-Agent strings and adding some timeouts. Because of this, the script defaults to saving only the first 100 links. I have left out the code to fill out the comment forms because I feel that piece of code would do more harm than good.

Anyway, I think there is a huge problem with comment spam that needs to be fixed. The fact that so many pages can be found in a single search is amazing. Google blocking queries when it detects a bot is definitely a step in the right direction. The fact that I was able to get around it so easily is not.