Mike kindly started the presentation with a consumer warning, letting us know in advance that he was going to be pimping JIRA (since this was going to be case-study-esque).
These days JIRA uses Lucene for “generic data indexing”: fast retrieval of complex data objects. This isn’t about searching text for “dog” sorted by relevance. The statistics pages all come back from a Lucene index, not from the DB.
Lucene has a way for you to write your own sort routines via Sort and SortField.
I have seen the “viral Lucene” pattern apply in a variety of projects. You start out using it for /search, and then you see that you can use it for other things. Slowly your DB is doing less, and your Lucene indexes are growing. This is a killer open source project, even if the API is a little weird.
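To get a feel for what “generic data indexing” with a custom sort looks like, here is a minimal sketch in plain Python. This is not Lucene itself (Sort and SortField live in Lucene’s Java API); it just mimics the idea of filtering structured records out of an index and ordering the hits by a stored field instead of relevance. Every name and record below is made up for illustration.

```python
# Toy "index" of structured issue records (JIRA-style), not free text.
# In real Lucene you would index these as Documents with Fields and
# hand the searcher a Sort built from SortFields; this mimics the idea.

issues = [
    {"key": "WEB-12", "status": "open",   "priority": 1, "votes": 7},
    {"key": "WEB-3",  "status": "closed", "priority": 2, "votes": 2},
    {"key": "API-9",  "status": "open",   "priority": 3, "votes": 9},
    {"key": "API-1",  "status": "open",   "priority": 1, "votes": 4},
]

def search(index, sort_field, reverse=False, **filters):
    """Filter by exact field values, then sort the hits by a stored
    field (the analogue of a Lucene SortField) rather than relevance."""
    hits = [doc for doc in index
            if all(doc.get(f) == v for f, v in filters.items())]
    return sorted(hits, key=lambda doc: doc[sort_field], reverse=reverse)

open_by_priority = search(issues, "priority", status="open")
print([doc["key"] for doc in open_by_priority])
# -> ['WEB-12', 'API-1', 'API-9']
```

The `reverse` flag plays the role of the descending-order switch on a SortField; everything else the DB used to do (filtering, ordering) comes straight off the index.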
Hadoop: Open Source MapReduce
I had a couple of people ask “why hasn’t Google open sourced our MapReduce?” They didn’t know about Hadoop:
Hadoop is a framework for running applications on large clusters of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named map/reduce, where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both map/reduce and the distributed file system are designed so that node failures are automatically handled by the framework.

The intent is to scale Hadoop up to handle thousands of computers. Hadoop has been tested on clusters of 600 nodes.

Hadoop is a Lucene sub-project that contains the distributed computing platform that was formerly a part of Nutch. This includes the Hadoop Distributed Filesystem (HDFS) and an implementation of map/reduce.

For more information about Hadoop, please see the Hadoop wiki.
Christophe Bisciglia of the open source group has been putting great effort into UW classes where Hadoop is used in the curriculum.
March 22nd, 2007 at 9:07 pm
Do you think Ferret provides an equivalent pattern for Ruby apps? My understanding is that Ferret started as a Lucene port but has moved further and further from Lucene in implementation. Do you think this is the case and if so is that a good thing, bad thing, or irrelevant thing?
I’ve been thinking about testing out Ferret, but so far have not. Maybe it’s about time?
March 22nd, 2007 at 9:14 pm
And there is Solr: http://lucene.apache.org/solr/
Worth checking out too. Ferret has moved away a bit, which is a shame from the standpoint of index compatibility, but good if it is more Ruby-ish (if that is all you care about).
Cheers,
Dion
March 28th, 2007 at 12:26 am
Solr indeed! And now with more Ruby goodness with the solr-ruby library we’ve developed: http://wiki.apache.org/solr/solr-ruby
We could use a built-in acts_as_solr, though there is already an acts_as_solr at RubyForge which may do the trick until we roll it into solr-ruby proper.
And don’t forget Flare: http://wiki.apache.org/solr/Flare as demonstrated on several datasets here: http://code4lib.org/2007/hatcher
p.s. Hey Dion!
March 28th, 2007 at 12:30 am
Re: Ferret – it’s a good thing. There were very well considered decisions that forked it from the Java Lucene file format. The creator of Ferret is collaborating with the KinoSearch creator on the Lucene Lucy project in order to bring their goodies back to the Lucene community.
May 19th, 2008 at 12:15 am
Thanks for this helpful info.
June 25th, 2008 at 12:13 am
Yes, this was helpful to me. Thank you!