If you just want all the terms to be required, regardless of the order or proximity in which they appear, that's a simple fix. Just add: parser.setDefaultOperator(QueryParser.Operator.AND); If all of your queries will start at the beginning of the field you wish to match, then you can change the...
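For reference, a minimal Lucene.Net 3.x sketch of the default-AND idea (the "content" field and the analyzer are placeholders, and the .NET port exposes the setter as a DefaultOperator property rather than the Java setDefaultOperator method, so the exact member name may vary slightly by version):

using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Version = Lucene.Net.Util.Version;

var analyzer = new StandardAnalyzer(Version.LUCENE_30);
var parser = new QueryParser(Version.LUCENE_30, "content", analyzer);
// Require every term: "quick brown fox" now parses as +quick +brown +fox
parser.DefaultOperator = QueryParser.AND_OPERATOR;
var query = parser.Parse("quick brown fox");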
c#,search,orchardcms,lucene.net
You should use the right search field name. The search field name for CreatedUtc is created. To find the right search field name you can: search the source code and view usages of the IContentHandler.OnIndexing<TPart> method, or launch Luke 4.0 and open the ~/App_Data/Sites/Default/Indexes/YourIndexName folder. https://code.google.com/p/luke/downloads/detail?name=lukeall-4.0.0-ALPHA.jar&can=2&q= ...
c#,asp.net,.net,lucene,lucene.net
You can use the Luke tool, which can open a Lucene index and let you peek at the contents. It's very instructive. I'm not including a link to the tool here, because it's poorly maintained. You have to download a Luke version matching your Lucene version - and several people...
linq,ravendb,lucene.net,contains
Not sure this is what you're looking for, but this is using RavenDB (build 3548) DocumentQuery, which takes a Lucene query in the where statement: using (var session = _documentStore.OpenSession()) { var result = session.Advanced .DocumentQuery<Events>() .Where("Details: *test*") .ToList(); } http://ravendb.net/docs/article-page/2.0/csharp/client-api/querying/query-and-lucene-query Edit: This might not be very effective in terms...
This is probably happening because there are no terms in your index that match "gloves*". When a MultiTermQuery is rewritten, it finds the Terms that are suitable, and creates primitive queries (such as TermQuery) on those terms. If no suitable terms are found, you'll see an empty query generated instead,...
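A rough way to see this for yourself, as a sketch assuming Lucene.Net 3.x, a hypothetical "name" field, and a directory variable pointing at your index: rewrite the query against your reader and look at what comes back.

using (var reader = IndexReader.Open(directory, true))
{
    var parser = new QueryParser(Version.LUCENE_30, "name", new StandardAnalyzer(Version.LUCENE_30));
    Query query = parser.Parse("gloves*");
    // Rewrite expands the prefix query against the actual terms in the index;
    // if no term starts with "gloves", the rewritten query matches nothing.
    Query rewritten = query.Rewrite(reader);
    Console.WriteLine(rewritten);
}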
This has nothing to do with the node ID being -1; the root node of Umbraco is always -1. The reason for your error is that you have segment files (e.g. a segment_c file) with 0 KB size in your index folder ~/App_Data/TEMP/ExamineIndexes/Internal. You should delete those empty segment files and everything will...
lucene,lucene.net,morelikethis
The only thing you are doing wrong is thinking that Lucene scores are percentages. They aren't. Document scores for a query are to be used to compare the strength of matches within the context of that single query. They are quite effective at sorting results, but they are not percentages,...
Not sure about Lucene, but you can do what you ask for with a MultiMap index: public class CustomerAndBusiness_ByName : AbstractMultiMapIndexCreationTask<CustomerBusiness> { public CustomerAndBusiness_ByName() { AddMap<Business>(businesses => from business in businesses select new { business.Id, business.Name }); AddMap<Customer>(customers => from customer in customers select new { customer.Id, customer.Name }); Index(x...
asp.net-mvc,c#-4.0,lucene,lucene.net
For me there were two issues. First, the index was not created, so the first time it threw this error; the following code resolved that issue: if (!System.IO.Directory.EnumerateFiles(indexDirectory).Any()) { return new List<Model>(); } Second, do not forget to dispose of your IndexSearcher, IndexReader and IndexWriter objects. Disposing memory of...
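On the second point, a sketch of the dispose pattern (assuming Lucene.Net 3.x; indexDirectory and query come from your own code):

using (var directory = FSDirectory.Open(indexDirectory))
using (var reader = IndexReader.Open(directory, true))
using (var searcher = new IndexSearcher(reader))
{
    var hits = searcher.Search(query, 10);
    // use hits here
}   // searcher, reader and directory are all disposed when the block ends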
jquery,autocomplete,lucene.net,jquery-1.7,jquery-1.10
Finally, after my searches, I found the best solution here: https://github.com/jquery/jquery-migrate/...
The Lucene-Gosen analyzer does not appear to have been ported to Lucene.Net. You can make a request on their GitHub page, or you could help them out by porting it and submitting a pull request. Once that analyzer exists, then using the article here - using their basic code, just change...
I don't know which version of Lucene you are using, but assuming you are using Java and version 4.0+, you should open the IndexWriter in APPEND mode, configured via IndexWriterConfig. If you are using .NET, there should be a close counterpart.
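In Lucene.Net 4.x the counterpart looks roughly like this (a sketch; the index path and analyzer are placeholders, and the types assume the 4.8 port):

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

var dir = FSDirectory.Open("path/to/index");
var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, new StandardAnalyzer(LuceneVersion.LUCENE_48))
{
    // APPEND opens the existing index instead of recreating it
    OpenMode = OpenMode.APPEND
};
using (var writer = new IndexWriter(dir, config))
{
    // add or update documents here, then commit
    writer.Commit();
}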
lucene,lucene.net,windows-azure-queues
The IndexWriter is thread-safe; it's safe to call from different threads. It's okay to never call optimize. (You could write a custom merge policy if the default doesn't work for you.) You will flush all documents to disk by calling commit. There's no need to dispose of your writer....
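As a sketch of that pattern (Lucene.Net 3.x assumed; items and the BuildDocument helper are hypothetical placeholders for your own code):

// One writer shared by the whole process
var writer = new IndexWriter(FSDirectory.Open("index"),
    new StandardAnalyzer(Version.LUCENE_30), IndexWriter.MaxFieldLength.UNLIMITED);

Parallel.ForEach(items, item =>
{
    // AddDocument is safe to call from multiple threads on the same writer
    writer.AddDocument(BuildDocument(item));
});

// Make the added documents visible to newly opened readers
writer.Commit();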
Two things: A.) You're setting a long value with an int. I'm guessing this probably is leading to some wonky mishap a level below. Stop doing that. status.SetLongValue(statusCode); becomes status.SetIntValue(statusCode); B.) Don't parse a string to get an int. Basically QueryParser is the wrong choice here. Where you have: var...
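For what it's worth, a sketch of the numeric route in Lucene.Net 3.x (the "status" field name and the range bounds are hypothetical; doc and statusCode come from your own code):

// Index the status as a real numeric field
var status = new NumericField("status", Field.Store.YES, true);
status.SetIntValue(statusCode);
doc.Add(status);

// Query it with a numeric range (here an exact match on 5) instead of QueryParser
Query query = NumericRangeQuery.NewIntRange("status", 5, 5, true, true);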
full-text-search,sql-azure,lucene.net
I also ran into that problem while migrating to Azure and have ended up with that same permissions model. Since your userIds are integers and won't have special characters, you can rely on many of the Lucene(.Net) analyzers, like StandardAnalyzer and WhitespaceAnalyzer, to split a list of IDs into...
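A sketch of that approach (Lucene.Net 3.x assumed; the "allowedUsers" field name and the IDs are made up for the example):

// Index the permitted user IDs as one space-separated string;
// WhitespaceAnalyzer turns it into one term per ID
var doc = new Document();
doc.Add(new Field("allowedUsers", "12 57 203", Field.Store.NO, Field.Index.ANALYZED));

// At query time, restrict results to documents the current user may see
var permissionClause = new TermQuery(new Term("allowedUsers", "57"));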
I resolved the problem. Our site was missing a scheduled task. I added it back, and it is now running every minute, so it is able to pick up the updated content.
Try using SimpleAnalyzer when you index your documents, as well as when you search. It's usually a good idea to keep analysis at index and query time the same until you have a good reason to do otherwise. using ( IndexWriter writer = new IndexWriter(FSDirectory.Open("index"), new SimpleAnalyzer(), true, IndexWriter.MaxFieldLength.LIMITED)) With...
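And the matching query side of that idea, a small sketch assuming Lucene.Net 3.x and a placeholder "content" field:

using (var searcher = new IndexSearcher(FSDirectory.Open("index"), true))
{
    // Same analyzer at query time as at index time
    var parser = new QueryParser(Version.LUCENE_30, "content", new SimpleAnalyzer());
    var hits = searcher.Search(parser.Parse(searchText), 10);
}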
The QueryParser doesn't support that. You can construct such a query using the SpanQuery API: SpanQuery firstwordQuery = new SpanTermQuery(new Term("myField", "system")); //Unfortunately, Lucene.Net doesn't have SpanMultiTermQueryWrapper... SpanQuery secondwordQuery = new SpanRegexQuery(new Term("myField", "clean.*")); SpanQuery[] spanClauses = new SpanQuery[] {firstwordQuery, secondwordQuery}; Query finalQuery = new SpanNearQuery(spanClauses, 0, true); ...
c#,.net,asp.net-mvc,lucene.net
I've managed to get this working just fine with the help of a little helper class I wrote for the project. The factory class below shares a single indexReader with any number of threads and ensures that the reader returned from GetCurrentReader() is kept in sync with the state of...
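The original helper class isn't reproduced here, but a rough sketch of the same idea (Lucene.Net 3.x; the class and member names are mine, not the original's, and a production version should reference-count readers rather than disposing one that searches may still be using):

public class SearcherFactory
{
    private readonly object _sync = new object();
    private IndexReader _reader;

    public SearcherFactory(Directory directory)
    {
        _reader = IndexReader.Open(directory, true);
    }

    public IndexReader GetCurrentReader()
    {
        lock (_sync)
        {
            // Reopen returns a new instance only if the index has changed
            var newReader = _reader.Reopen();
            if (newReader != _reader)
            {
                _reader.Dispose();
                _reader = newReader;
            }
            return _reader;
        }
    }
}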
I'll characterize what SO is doing a bit more closely here: While I'm not really privy to the implementation details of StackOverflow, you'll note the same behavior when searching for "java" or "hibernate", even though these would have no issues with standard analyzer. They will be transformed into "[java]" and...
I've just recreated this program in Java Lucene (4.4) and I don't see any issue with the numeric range query. 1) 3 Documents field:0 - value:137 field:1 - value:41 field:2 - value:908 field:3 - value:871 field:4 - value:686 field:0 - value:598 field:1 - value:623 field:2 - value:527 field:3 - value:364 field:4...
I'm guessing you are using StandardAnalyzer when indexing your terms, and then are searching without analysis in some form, or with a different form of analysis. The 2.9 StandardAnalyzer (ClassicAnalyzer, as of version 3.1) has some interesting behavior around hyphens. To quote the StandardTokenizer documentation: Splits words at hyphens, unless...
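If you want to see exactly what the analyzer does to a hyphenated term, a quick sketch (Lucene.Net 3.x; the attribute type name varies across versions, so treat the exact members as an assumption):

var analyzer = new StandardAnalyzer(Version.LUCENE_29);
TokenStream stream = analyzer.TokenStream("field", new StringReader("wi-fi x-300"));
var termAttr = stream.AddAttribute<ITermAttribute>();
while (stream.IncrementToken())
{
    // Prints each token the analyzer produced, e.g. "wi", "fi", "x-300"
    Console.WriteLine(termAttr.Term);
}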
You can use the query parser; it's a better approach in Lucene for adding documents, and I have used this to create indexes in many projects: doc.Add(new Field("Id", searchResult.Id, Field.Store.YES, Field.Index.ANALYZED_NO_NORMS)); ...
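A sketch of that line in context (Lucene.Net 3.x; searchResult and writer come from your own code, and the extra "Title" field is just an illustration):

var doc = new Document();
doc.Add(new Field("Id", searchResult.Id, Field.Store.YES, Field.Index.ANALYZED_NO_NORMS));
doc.Add(new Field("Title", searchResult.Title, Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);
writer.Commit();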
The Advanced Database Crawler module is no longer supported or required in Sitecore 7. Instead, there is a new built-in ContentSearchManager API that you should use. Start by reading the Developer's Guide to Item Buckets and Search and the Search and Indexing Guide on SDN, and also take a...
I have looked around a bit and tried different things. What I have done, and it works, is the following: sort = new Sort(new SortField("Distance", SortField.SCORE, false)); Use true to get the closest first, false otherwise....
You can change the lock timeout by increasing IndexWriter.WRITE_LOCK_TIMEOUT, or setting it to -1 to wait forever. There's also a Lock.LOCK_POLL_INTERVAL that states how often Lock.Obtain should attempt to retrieve the lock.
I ended up actually porting the ComplexPhraseQueryParser from Java to C#. It was a lot easier than expected and was a good exercise for learning C# a bit better. I have provided the code below in case it is helpful to anyone else. Please note that it is still...
First things first, the most useful tool you should be aware of when trying to understand why something is being scored a certain way is IndexSearcher.Explain: Explanation explain = indexSearcher.Explain(parsedQuery, hits.ScoreDocs[0].Doc); This gives us a detailed explanation of how that score was arrived at. In this case, the two different scoring queries...
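To actually read the output, just print the Explanation (a small sketch following the line above, using the same variables):

Explanation explanation = indexSearcher.Explain(parsedQuery, hits.ScoreDocs[0].Doc);
// ToString() renders the full scoring tree: term frequency, idf, field norms, boosts...
Console.WriteLine(explanation.ToString());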
lucene,lucene.net,prefix,information-retrieval
You need to reindex using your custom analyzer. Applying a stemmer only at query time is useless. You might kludge together something using wildcards, but it would remain an ugly, unreliable kludge.
Got it. IndexReader.Reopen() returns a reopened instance of the reader on which the method was called, while the original instance stays as it is. Thus, the code needs to be modified like this: _enIndexWriter.DeleteDocuments(query); _enIndexWriter.Commit(); _enIndexReader = _enIndexReader.Reopen(); _enIndexSearcher = new IndexSearcher(_enIndexReader); ...
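One extra detail worth noting in that pattern: when Reopen() does return a new instance, the old reader is still open and is not closed automatically, so it should be disposed once you're finished with it. A sketch using the same variable names:

IndexReader newReader = _enIndexReader.Reopen();
if (newReader != _enIndexReader)
{
    // dispose the superseded reader before swapping in the new one
    _enIndexReader.Dispose();
    _enIndexReader = newReader;
    _enIndexSearcher = new IndexSearcher(_enIndexReader);
}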
I found the answer; it is in the following great article: http://searchhub.org/2011/12/28/why-not-and-or-and-not/ What I did to solve my problem is: search by Lucene_KsuDmsDesc = A, search by Lucene_EmployeeNO = B, search by Lucene_CreatedBy = C, search by Lucene_CreatedDateTime = D, search by Lucene_CategoryID = E. The formula is (A and...
search,elasticsearch,lucene.net,nest,elastic
As said in the documentation, you can provide multiple possible inputs by indexing like this: curl -X PUT 'localhost:9200/music/song/1?refresh=true' -d '{ "description" : "Breakfast,Sandwich,Maker", "suggest" : { "input": [ "Breakfast", "Sandwitch", "Maker" ], "output": "Breakfast,Sandwich,Maker" } }' This way, you suggest with any word of the list as input. Obtaining...