javascript,jquery,elasticsearch
This is because the binding you've used (i.e., 'terms.region.slug') is incompatible with dot-notation. Use of dot-notation requires that the binding be parseable as an identifier. However, bracket notation is equivalent to dot-notation and allows any binding to be used. console.log( filter.and[0].term['terms.region.slug'] );...
database,post,asynchronous,elasticsearch,get
To ensure data is available, you can make a refresh request to corresponding index before GET/SEARCH: http://localhost:9200/your_index/_refresh Or refresh all indexes: http://localhost:9200/_refresh ...
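For illustration, here's a minimal sketch with the Python client (elasticsearch-py); the default host and the index name your_index are assumptions, not taken from the original question:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Refresh one index so freshly indexed documents become searchable right away
es.indices.refresh(index='your_index')

# Or refresh all indexes
es.indices.refresh()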
term filters don't analyze the text to be searched. Meaning, if you search for 000A8D810F5A, this is exactly what is searching for (upper-case letters included). But, your macAddr and insturmentName fields and others are just strings. Meaning, they use the standard analyzer which lowercases the terms. So, you are searching...
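As a hedged sketch of the difference (Python client, made-up index name, and assuming macAddr was indexed with the standard analyzer, i.e. lowercased):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# term: the value is NOT analyzed, so it must equal the indexed token exactly
term_query = {"query": {"term": {"macAddr": "000a8d810f5a"}}}   # lowercased by hand

# match: the value IS analyzed with the field's analyzer, so the original casing works
match_query = {"query": {"match": {"macAddr": "000A8D810F5A"}}}

print(es.search(index='my_index', body=term_query))
print(es.search(index='my_index', body=match_query))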
Write your query like this, i.e. sort needs to go to the top level and not be nested in the query part: { "query": { "filtered": { "query": { "query_string": { "fields": [ "description" ], "query": "sun" } }, "filter": { "term": { "permissions": 1 } } } }, "sort": [ { "likes_count":...
Try this (with lowercase value for the terms bool): { "query": { "bool": { "must": [ { "match_all": {} } ], "must_not": { "terms": { "interdictions": [ "s2v" ] } } } }, "from": 0, "size": 10 } Most probably, you have an analyzer (maybe the standard default one) that...
query_string will do just this. { "query": { "query_string": { "query": "example \"hello world\" elasticsearch", "default_operator": "AND" } } } ...
c#,.net,database,elasticsearch,nest
You can use filtered query with term filter: { "filtered": { "query": { "match_all": { } }, "filter": { "bool" : { "must" : [ {"term" : { "macaddress" : "your_mac" } }, {"term" : { "another_field" : 123 } } ] } } } } NEST version (replace dynamic...
elasticsearch,playframework,elastic4s
I guess once again RTFM is in order here: The docs state: IMPORTANT: The regular expression should match the token separators, not the tokens themselves. Meaning that in my case the matched token www.gravatar.com will not be a part of the tokens after analyzing the field. Instead use the Pattern...
Elasticsearch is "near real-time" by nature, i.e. all indices are refreshed every second (by default). While it may seem enough in a majority of cases, it might not, such as in your case. If you need your documents to be available immediately, you need to refresh your indices explicitly by...
You can build SearchDescriptor incrementally as under. I've used aggregations instead of facets (which are deprecated now) but I hope you get the idea. var sd = new SearchDescriptor<MyData>(); sd = sd.QueryRaw(<raw query string>); if (<should sort>) { string fieldToBeSortedOn; // input from user bool sortInAscendingOrder; // input from user...
python,python-2.7,elasticsearch
You could use the and filter for that purpose, and AND the two bool/should filters, like this: { "query": { "filtered": { "filter": { "and": [ { "bool": { "should": [ { "term": { "field1": "" } }, { "missing": { "field": "field1" } } ] } }, { "bool":...
Did you try to use extension method Suffix? This is how you can modify your query: ... .OnFields( f => f.ScreenData.First().ValueString, f => f.ScreenData.First().ValueString.Suffix("english")) .Type(TextQueryType.BestFields) ... Hope it helps....
I don't have a good explanation for this, but this query works for me in Groovy (had to enable logging in the script to see what _score is containing): multiplier * _score.score() ...
elasticsearch,docker,kibana,kibana-4
Can I do that with this image itself? Yes, just use Docker volumes to pass in your own config files. Let's say you have the following files on your docker host: /home/liv2hak/elasticsearch.yml /home/liv2hak/kibana.yml You can then start your container with: docker run -d --name kibana -p 5601:5601 -p 9200:9200...
sorting,elasticsearch,group-by,order
Edit to reflect clarification in comments: To sort an aggregation by string value, use an intrinsic sort; however, sorting on non-numeric metric aggregations is not currently supported. "aggs" : { "order_by_title" : { "terms" : { "field" : "title", "order": { "_term" : "asc" } } } }
This is the list of commands that I tested in ES 1.6.0: PUT /test { "mappings": { "_default_": { "dynamic_templates": [ { "murmur3_hashed": { "mapping": { "index": "not_analyzed", "norms": { "enabled": false }, "fielddata": { "format": "doc_values" }, "doc_values": true, "type": "string", "fields": { "hash": { "index": "no", "doc_values": true,...
elasticsearch,lucene,solr-boost
The _boost field (document-level boost) was removed, but field-level boosts, and query boosts still work just fine. It looks like you are looking for query boosts. So, if you wanted to boost matches on field1: "bool": { "should": [{ "terms": { "field1": ["67", "93", "73", "78", "88", "77"], "boost": 2.0...
elasticsearch,docker,dockerfile,kibana-4
The parent Dockerfile devdb/kibana is using a script to start kibana and elasticsearch when the docker container is started. See CMD ["/sbin/my_init"] and the script itself. When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles. Since your CMD only starts gunicorn,...
You need to re-create the mapping for your index and to re-index all the documents in it. You don't have doc_values enabled for that field: "doc_values": true. But you want your fielddata format to be doc_values. This is not possible. To be able to change the fielddata format from whatever...
Nesting [bool][must][bool][should] isolates "minimum_should_match" to only the list (array of objects) being searched on. See the below example. { "query" : { "bool" : { "must" : [ { "multi_match":{ "query":"disease resistant", "type":"cross_fields", "fields":[ "description", "planting", "maintenance", "name" ], "tie_breaker":0.3 } }, "bool" : { "should" : [ {"match" :...
This would be the list of stopwords for the standard analyzer: http://grepcode.com/file/repo1.maven.org/maven2/org.apache.lucene/lucene-analyzers-common/4.9.0/org/apache/lucene/analysis/core/StopAnalyzer.java?av=f#50 static { final List<String> stopWords = Arrays.asList( "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then",...
The limit filter doesn't limit the number of documents that are returned, just the number of documents that the query executes on each shard. If you want to limit the number of documents returned, you need to use the size parameter or in your case the Python slicing API like...
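A quick sketch with the Python client (the index name is made up) showing size/from, which is what actually caps the number of returned hits:

from elasticsearch import Elasticsearch

es = Elasticsearch()

body = {"query": {"match_all": {}}}

# size limits the hits returned, from_ sets the offset (pagination)
res = es.search(index='my_index', body=body, size=5, from_=0)
print(len(res['hits']['hits']))   # at most 5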
You will need to set up a multi-field, as the dash is causing the terms to be split. I have found an answer to a similar question which answers yours: http://stackoverflow.com/a/28859145/4134821
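In case it helps, here is a hedged sketch of such a multi-field mapping via the Python client (index, type and field names are placeholders): the analyzed field stays searchable as usual, while the not_analyzed sub-field keeps the dashed value as a single token for exact lookups.

from elasticsearch import Elasticsearch

es = Elasticsearch()

mapping = {
    "my_type": {
        "properties": {
            "code": {
                "type": "string",
                "fields": {
                    "raw": {"type": "string", "index": "not_analyzed"}
                }
            }
        }
    }
}
es.indices.put_mapping(index='my_index', doc_type='my_type', body=mapping)

# Exact match against the untokenized value, dash included
es.search(index='my_index', body={"query": {"term": {"code.raw": "ABC-123"}}})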
Apart from what @Val has mentioned, you can try out the term vector if you are intending to study the working of tokenisers. You can try out something like this just for examining the tokenisation happening in a field: GET /index-name/type-name/doc-id/_termvector?fields=field-to-be-examined To know more about tokenisers and their operations you can refer...
There are a couple of issues in your code: Issue 1: When you create your document in the second snippet, you're not using the correct mapping type and your body doesn't include the correct field name as declared in your mapping: client.create({ index: 'events', type: 'geo_point', <-------- wrong type body: {...
You can achieve that with a simple terms aggregation parametrized with an include property which you can use to specify either a regexp (e.g. alt.* in your case) or an array of values to be included in the buckets. Note that there is also the exclude counterpart, if needed: {...
So, it wasn't a problem with either Docker or Elastic. Just to recap, the same script throwing PUT requests at an Elasticsearch instance set up locally worked, but when throwing them at a container with Elasticsearch it failed after a few thousand documents (20k). Note that the overall number of documents was roughly...
You can use scripting for this to be implemented. { "query": { "filtered": { "filter": { "script": { "script": "_index['nationality']['US'].tf() > 3" } } } } } Here in this script the array "nationality" is checked for the term "US" and the count is taken by tf (term frequency). Now...
elasticsearch,querydsl,kibana-4
I am not sure you can do this as the Discovery section already uses the timestamp aggregation. Can you explain what you are trying to do? There are ways to add custom aggregations in the visualizations. If you open up the advanced section on the aggregation in the visualization you...
ruby-on-rails-3,github,rspec,elasticsearch,circleci
The error message says circle.yml should be in the repo's root directory, but you have it in the config directory.
mongodb,heroku,express,elasticsearch,bonsai-elasticsearch
Here is the answer by Bonsai support: - You could always set up a script with a curl command to index the MongoDB collections, then use cron to run the script at regular intervals. - You could also use an Elasticsearch client to index collections in some cases. So I...
There are multiple ways to conglomerate various calls into one call that will return different results depending on boolean inputs. You have 2 variables and 4 different outcomes; you have to implement logic that checks all of these somewhere, so you have to build if/else if/else blocks with the SearchDescriptors, however with...
java,elasticsearch,out-of-memory
Possible reasons (some of them): putting too much data into that memory, especially because of fielddata (used for sorting, aggregations mostly) configuration mistake, where you thought you set something for heap size, but it was wrong or in the wrong place, and your node starts with the default and that...
Ok, then what you need is simply to declare your trainings property with type: nested in your mapping, like this: { "mappings": { "person": { "properties": { "id": { "type": "string" }, "name": { "type": "string" }, "trainings": { "type": "nested", <----- add "nested" here "properties": { "attendanceDate": { "type":...
sorting,elasticsearch,aggregation
You almost had it. You just need to add an order property to your a1 terms aggregations, like this: GET myindex/_search { "size":0, "aggs": { "a1": { "terms": { "field": "FIELD1", "size":0, "order": {"a2": "desc"} <--- add this }, "aggs":{ "a2":{ "sum":{ "field":"FIELD2.SUBFIELD" } } } } } } ...
c#,mysql,database,elasticsearch,nest
You can find the id values from the ISearchResponse (based on your code example above) by looking at the objects in the Hits collection, rather than the Documents collection. Each Hit has an Id property. In the original indexing call (assuming you're doing that individually -- not via the _bulk...
The default search type is query_then_fetch. Both query_then_fetch and query_and_fetch involve calculating the term and document frequency locally on each of the shards in the index. However, if you want a more accurate calculation of term/document frequency, you can use dfs_query_then_fetch/dfs_query_and_fetch. Here the frequency is calculated across all the...
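If you happen to use the Python client, the search type is just a request parameter; a small sketch (index and query are placeholders, not from the question):

from elasticsearch import Elasticsearch

es = Elasticsearch()

body = {"query": {"match": {"title": "elasticsearch"}}}

# dfs_query_then_fetch first collects term/document frequencies from all shards,
# which gives more accurate scoring on small or unevenly distributed indexes
res = es.search(index='my_index', body=body, search_type='dfs_query_then_fetch')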
Props to Val's answer above. It was mostly that, but with another level of nesting. Here is the mapping: { "recom_un": { "properties": { "item": { "type": "nested", "properties": { "name": { "type": "string" }, "link": { "type": "string" }, "description": { "type": "string" }, "terms": { "type": "nested", "properties":...
You can use Terms Aggregation for this. POST <index>/<type>/_search?search_type=count { "aggs": { "duplicateNames": { "terms": { "field": "EmployeeName", "size": 0, "min_doc_count": 2 } } } } This will return all values of the field EmployeeName which occur in at least 2 documents....
python,elasticsearch,elastic,elasticsearch-py
Your query is almost correct. The error you get states ...Parse Failure [Failed to parse source..., which basically means that your query is ill-formed and doesn't comply with the Query DSL. The range query needs to be combined with the match query (using a bool/must query) and both need to...
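For reference, a hedged sketch of what such a combination can look like with the Python client (field names and the date range are invented, not taken from the question):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Both clauses sit under bool/must; neither query may float at the top level
body = {
    "query": {
        "bool": {
            "must": [
                {"match": {"description": "sun"}},
                {"range": {"created_at": {"gte": "2015-01-01", "lte": "2015-12-31"}}}
            ]
        }
    }
}
res = es.search(index='my_index', body=body)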
elasticsearch,facets,date-histogram
What you basically need is something like this (which doesn't work, as it's not an available feature): { "query": { "match_all": {} }, "aggs": { "docs_per_month": { "date_histogram": { "field": "date", "interval": "month", "min_doc_count": 0 }, "aggs": { "average": { "avg": { "script": "doc_count / 20" } } } }...
python,django,elasticsearch,django-haystack
You need to have the real time signal processor activated in settings: HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor' ...
elasticsearch,logstash,elasticsearch-plugin,logstash-configuration
I think this is a non shield related issue. Check this issue: https://github.com/elastic/logstash/issues/3127 Just like the post mentions, executing the following did the trick for me: ln -s /lib/x86_64-linux-gnu/libcrypt.so.1 /usr/lib/x86_64-linux-gnu/libcrypt.so ...
Copied from the answer in https://github.com/elastic/elasticsearch-net/issues/1278: This is due to the dynamic mapping behavior of ES when it detects new fields. You can turn this behavior off by setting dynamic: false or ignore in your mapping: client.Map<Foo>(m => m .Dynamic(DynamicMappingOption.Ignore) ... ); client.Map<Foo>(m => m .Dynamic(false) ... ); Keep in...
This is an interesting set-up you're trying to achieve but not one that I would recommend in the long run as you're putting your production node under stress very often. First off, the term "development" in this case makes little sense because as far as ES is concerned, you're adding...
elasticsearch,nested,elasticsearch-plugin
You need to use inner_hits in your query. Moreover, if you want to only retrieve the matching nested document and nothing else, you can add "_source":["books"] to your query and only the matching nested books will be returned, nothing else. UPDATE Sorry, I misunderstood your comment. You can add "_source":...
Yes https://github.com/elastic/elasticsearch-net/blob/develop/src/Nest/DSL/SearchDescriptor.cs line number 135 public static void Update(IConnectionSettingsValues settings, ElasticsearchPathInfo<SearchRequestParameters> pathInfo, ISearchRequest request) { pathInfo.HttpMethod = request.RequestParameters.ContainsKey("source") ? PathInfoHttpMethod.GET : PathInfoHttpMethod.POST; } Obviously you need to have SearchRequest.RequestParameters.ContainsKey("source") return true for it to do a Get. In future, just RTFM.
if you want to update you can do this: var response = _Instance.Update<BusinessElastic, object>(u => u .Index(elasticSearchIndex) .Type("business") .Id(result.Hits.FirstOrDefault().Id) .Doc(new Document {Column1="", Column2=""}) .DocAsUpsert() ); ...
Thanks to the comment of @Mrinal Kamboj and the answer of @Wormbo, I found my own answer: I changed the argument type to QueryContainer and if the argument is null, a new QueryMatchAll query is created, this works for me: public void ProcessQuery(QueryContainer query = null) { var searchResult =...
This occurs because ElasticSearch has no built-in type for decimals or currency, so your value is likely being converted to a float and suffering from floating point precision issues. You should be able to get around this by simply storing the value as a long (e.g. the number of cents...
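A small Python sketch of the cents idea (the helper names are made up); converting through Decimal avoids the very float rounding the answer warns about:

from decimal import Decimal

def to_cents(amount):
    # Store money as an integer number of cents, safe to index as a long
    return int((Decimal(str(amount)) * 100).to_integral_value())

def from_cents(cents):
    return Decimal(cents) / 100

price_cents = to_cents('19.99')   # 1999
print(from_cents(price_cents))    # 19.99 when reading back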
I think you don't have all the requirements set, yet. Here's what I'd start with: PUT /index { "settings": { "analysis": { "filter": { "code": { "type": "pattern_capture", "preserve_original": 1, "patterns": [ "(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)", "(\\d+)" ] } }, "analyzer": { "code": { "tokenizer": "pattern", "filter": [ "code", "lowercase" ] } }...
ruby,ruby-on-rails-4,elasticsearch
UPDATE (previous answer removed because it was wrong) After reading a bit about elastic search I think this will work for you def to_es_query { query: { bool: { must: musts } } } end def musts @musts ||= [{ match:{ title_en: @title } }] #possibly { term: {type: @type...
Try this instead of that script: { "query": { "filtered": { "filter": { "terms": { "Commodity": [ 55, 150 ], "execution": "and" } } } } } ...
When calling termsFilter, the method is expecting a var args invocation of Any*, so termsFilter("category", 1, 2) would work. But termsFilter("category", Array(1,2)) is treated as a single argument, since Array is a subclass of Any of course. By adding : _* we force Scala to see it as a...
java,scroll,elasticsearch,parallel-processing
After searching some more, I got the impression that this (same scrollId) is by design. After the timeout has expired (which is reset after each call; see "Elasticsearch scan and scroll - add to new index"). So you can only get one opened scroll per index. https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html states: Scrolling is not...
You need to import the namespace for the ElasticSearch object into your controller class. Typically this would be done with a use statement near the top of the file (under the namespace declaration for the class), i.e.: namespace MyBundle\Controller use Elasticsearch; class index { public function indexAction() { $client =...
elasticsearch,elastic,elasticsearch-net
You can use low level client to pass raw json. var elasticsearchClient = new Elasticsearch.Net.ElasticsearchClient(settings); var elasticsearchResponse = elasticsearchClient.Index("index", "type", "{\"UserID\":1,\"Username\": \"Test\",\"EmailID\": \"[email protected]\"}"); UPDATE Based on documentation, try this one: var sb = new StringBuilder(); sb.AppendLine("{ \"index\": { \"_index\": \"indexname\", \"_type\": \"type\" }}"); sb.AppendLine("{ \"UserID\":1, \"Username\": \"Test\", \"EmailID\": \"[email protected]\" }");...
ES 1.5.2 uses Lucene 4.10.4. Get the jars you need from here. Note that you might need not only lucene-core but others as well. Just browse the Maven repo and look for other jars as well, but all should be version 4.10.4.
You could use a terms aggregation with a scripted field: { query: { filtered: { filter: { regexp: { Url : ".*interestingpage.*" } } } }, size: 0, aggs: { myaggregation: { terms: { script: "doc['UserAgent'] =~ /.*android.*/ || doc['UserAgent'] =~ /.*ipad.*/ || doc['UserAgent'] =~ /.*iphone.*/ || doc['UserAgent'] =~ /.*mobile.*/...
Instead of using "\t" as the seperator, input an actual tab. like this: filter { csv { separator => " " } } ...
Use a should instead of a must: { "fields": [ "host" ], "filter": { "bool": { "should": [ { "regexp": { "_uid": { "value": ".*000ANT.*" } } }, { "regexp": { "_uid": { "value": ".*0BBNTA.*" } } } ] } } } ...
The params keyword in C# indicates that the method takes a variable number of parameters. For example, a method with this signature: public void DoStuff(params string[] values) { ... } Could be called like this: DoStuff(); DoStuff("value1"); DoStuff("value1", "value2", "value3", "value4", "value5"); //etc. So in your case, the array is...
This example is working, maybe it will put some light on your issue. var indicesResponse = client.DeleteIndex(descriptor => descriptor.Index(indexName)); client.CreateIndex(indexName, c => c .AddMapping<Exhibitor>(m => m .MapFromAttributes() .Properties(o => o .MultiField(mf => mf .Name(x => x.CompanyName) .Fields(fs => fs .String(s => s.Name(t => t.CompanyName).Index(FieldIndexOption.Analyzed).Analyzer("standard")) .String(s => s.Name(t =>...
elasticsearch,docker,mesos,marathon
Elasticsearch and NFS are not the best of pals ;-). You don't want to run your cluster on NFS, it's much too slow and Elasticsearch works better when the speed of the storage is better. If you introduce the network in this equation you'll get into trouble. I have no...
linux,ubuntu,elasticsearch,pid
Until elasticsearch fixes it, one possible workaround is to adapt the file /etc/init.d/elasticsearch: change PID_DIR="/var/run/elasticsearch" to PID_DIR="/var/run" and the PID file will now be created directly in the run folder. https://github.com/elastic/elasticsearch/issues/11594...
java,elasticsearch,elasticsearch-plugin
When indexing documents in this form, Elasticsearch will not be able to parse those strings as dates correctly. In case you transformed those strings to correctly formatted timestamps, the only way you could perform the query you propose is to index those documents in this format { "start": "2010-09", "end":...
A term query won't analyze your search text. This means you need to analyze the text yourself and provide the query in token form for the term query to actually work. Use a match query instead, and things will work like magic. So when a string like below goes to Elasticsearch, it's tokenized (or...
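To see why this matters, here is a hedged sketch with the Python client (index/field names are placeholders; the analyze-as-request-parameter form matches 1.x-era clients, same era as this answer): first inspect the tokens the analyzer produces, then query with match so the query text is analyzed the same way.

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Show what actually gets indexed for a given string
tokens = es.indices.analyze(index='my_index', analyzer='standard', text='Hello-World Foo')
print([t['token'] for t in tokens['tokens']])   # e.g. ['hello', 'world', 'foo']

# match analyzes the query text too, so the original string just works
res = es.search(index='my_index', body={"query": {"match": {"title": "Hello-World Foo"}}})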
You can use the following pattern for the same: %{GREEDYDATA}\[name:%{WORD:name};%{GREEDYDATA},acc:%{NOTSPACE:account}\] GREEDYDATA is defined as follows - GREEDYDATA .* The key lies in understanding the GREEDYDATA macro. It matches as many characters as possible....
You can achieve this using the info command: Example: from elasticsearch import Elasticsearch es = Elasticsearch() es.info() ...
java,date,elasticsearch,numberformatexception,spring-data-elasticsearch
Since the exception complains about a NumberFormatException, you should try sending the date as a long (instead of a Date object) since this is how dates are stored internally. Note that I'm calling date.getTime() in the code below: SearchQuery searchQuery = new NativeSearchQueryBuilder() .withQuery(matchAllQuery()) .withFilter(rangeFilter("publishDate").lt(date.getTime())).build(); ...
I don't think you can do this with terms. Try with another aggregation: { "aggs": { "myAggrs": { "terms": { "field": "system" } }, "missing_system": { "missing": { "field": "system" } } } } And the result will be: "aggregations": { "myAggrs": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ {...
json,elasticsearch,couchdb,elasticsearch-river
Since you're using the elasticsearch-river-couchdb plugin, you can configure the river with a groovy script that will remove all the fields but the ones you specify. An example is given in the official documentation of the plugin and simply amounts to adding the following script to the couchdb object:...
You need a query_string not a match: { "query": { "query_string": { "query": "PayerAccountId:\"156023466485\" AND UsageStartDate:[2015-01-01 TO 2015-10-01]" } } } ...
Your should clauses should actually be using filters. In yours, you have "_expires": null, which is not a filter. For example, try: { "missing": { "field": "_expires" } }, { "range": { "_expires": { "gte": 1433947304884 } } } ...
You can combine both of these aggregations into 1 request. { "aggs" : { "uniques" : { "cardinality" : { "field" : "guid" } }, "uniquesTerms": { "terms": { "field": "guid" }, "aggs": { "jobs": { "top_hits": { "_source": "title", "size": 1 }} } } } ...
Since you are using multiple indexes, one for every day, you can get the same _id. What makes a document unique is the uid, which is a combination of index, type and id. There is no way in Elasticsearch to change this, to my knowledge.
You need to use a multi-field (https://www.elastic.co/guide/en/elasticsearch/reference/current/_multi_fields.html) to achieve what you want. For example: "title": { "type": "string", "fields": { "raw": { "type": "string", "index": "not_analyzed" } } } You'd then use title:whatever to search and title.raw in the kibana panel to get the correct legend behavior....
json,python-2.7,elasticsearch,google-search-api
Here is a possible answer to your problem. def myfunk( inHole, outHole): for keys in inHole.keys(): is_list = isinstance(inHole[keys],list); is_dict = isinstance(inHole[keys],dict); if is_list: element = inHole[keys]; new_element = {keys:element}; outHole.append(new_element); if is_dict: element = inHole[keys].keys(); new_element = {keys:element}; outHole.append(new_element); myfunk(inHole[keys], outHole); if not(is_list or is_dict): new_element = {keys:inHole[keys]}; outHole.append(new_element);...
Solr heap size must be set according to your needs. Setting -Xms=2G and -Xmx=12G is just a recommendation that suits lots of popular Solr applications, but it's not mandatory. You need to assess your requirements and set the heap to work well for you. I really recommend you use at least...
One way to achieve this using Groovy is shown below, i.e. you can use the max method of the list of values. Example: { "query": { "function_score": { "functions": [ { "script_score": { "script": "max_score=doc[\"foo.bar\"].values.max();if(max_score >= input) {return (max_score - input);} else { return (max_score - input) *2;}", "lang": "groovy",...
That error says you don't have enough memory (more specifically, memory for fielddata) to store all the values from hash, so you need to take them out from the heap and put them on disk, meaning using doc_values. Since you are already using doc_values for my_prop I suggest doing the...
node.js,mongodb,elasticsearch,aggregation
Your category field should be analyzed with a custom analyzer. Maybe you have some other plans with the category, so I'll just add a subfield used only for aggregations: { "settings": { "analysis": { "filter": { "category_trimming": { "type": "pattern_capture", "preserve_original": false, "patterns": [ "(^\\w+\/\\w+)" ] } }, "analyzer": {...
You can achieve this by using scripting. Try the script below: { "query": { "function_score": { "functions": [ { "script_score": { "script": "_score * doc['bodyWeight'].value / doc['height'].value" } } ], "score_mode": "sum", "boost_mode": "replace" } } } Likewise, you can compute the score using field data. For more reference in...
I don't know how to do this in Python, but from the Elasticsearch point of view, this is what the request looks like: GET /_all/_search?search_type=count { "aggs": { "NAME": { "terms": { "field": "_type", "size": 100 } } } } ...
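For what it's worth, here is a hedged Python sketch of the same request with elasticsearch-py (search_type='count' is the 1.x-era equivalent of size=0; treat this as an illustration, not the asker's exact setup):

from elasticsearch import Elasticsearch

es = Elasticsearch()

body = {
    "aggs": {
        "types": {
            "terms": {"field": "_type", "size": 100}
        }
    }
}

# search_type=count returns only the aggregation, no hits
res = es.search(index='_all', body=body, search_type='count')
for bucket in res['aggregations']['types']['buckets']:
    print(bucket['key'], bucket['doc_count'])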
"action.auto_create_index" is a bit complex beyond the true/false values. We can use patterns occuring in the index names to be identified and can specify whether it can be created automatically if it is not already existing. An example would be action.auto_create_index: -b*,+a*,-* Here the index starting with "a" will be...
You can use the delete by query API to achieve that. For instance, the following command will delete all documents of type nbu_job in the index your_index: curl -XDELETE 'http://localhost:9200/your_index/_query?q=_type:nbu_job' If you need to verify what is going to be deleted with the above command, I suggest you run the...
It might be as simple as removing the newline character you have between the first and the second line: From this:
curl -XPUT host/my_index/_mapping/place
--data '{
To this:
curl -XPUT host/my_index/_mapping/place --data '{
I copy/pasted your exact command above and it failed for me. Then I removed the newline and...
You just need to specify the full path to the nested field: { "query": { "nested": { "path": "properties", "query": { "term": { "properties.FieldA.raw": "one" } } } } } Here is some code I used to test it: http://sense.qbox.io/gist/f2d9e5eae7496ca0fce8d2d23e17bf4d72d9300a...
Try this { "filter": { "bool": { "must": [ { "regexp": { "_uid": { "value": ".*000ANT.*" } } } ] } } } ...
AND/OR Logic can be applied as a Filter. Filters in elasticsearch will evaluate before a query is executed, so if you need to apply this logic to a call that also contains a string query, it will still be efficient and applicable. As your OR code is evaluating the same...
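As an illustration of that AND/OR shape, here is a hedged sketch of a 1.x filtered query with the Python client (index, fields and values are placeholders, not from the question):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# The query_string part scores, the bool filter applies the boolean logic without scoring
body = {
    "query": {
        "filtered": {
            "query": {"query_string": {"query": "some text"}},
            "filter": {
                "bool": {
                    "should": [                       # OR
                        {"term": {"status": "active"}},
                        {"term": {"status": "pending"}}
                    ],
                    "must": [                         # AND
                        {"term": {"type": "user"}}
                    ]
                }
            }
        }
    }
}
res = es.search(index='my_index', body=body)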
You can try to increase the fielddata circuit breaker limit to 75% (default is 60%) in your elasticsearch.yml config file and restart your cluster: indices.breaker.fielddata.limit: 75% Or if you prefer to not restart your cluster you can change the setting dynamically using: curl -XPUT localhost:9200/_cluster/settings -d '{ "persistent" : {...