
Scripted Fields for if/else condition in Kibana 4

Tag: elasticsearch,lucene,kibana,kibana-4

I have some numeric fields in Elasticsearch and need to implement some logic that requires scripted fields. I am new to Kibana 4's scripted fields feature, so I need help with the basic format for writing a simple if/else condition in a scripted field. Detailed explanation: I have a numeric field x in Elasticsearch, and I need to create two scripted fields f1 and f2 such that:

if x == 0
    f1 = 1 and f2 = 0
else
    f1 = 0 and f2 = 1

I just need the correct syntax to do this in Kibana 4's scripted fields feature, or a note that it can't be done.
For more information on scripted fields, refer to: https://www.elastic.co/guide/en/kibana/current/settings.html

Best How To:

To create a scripted field, you go into the Settings for the index and click on the Scripted Fields tab. Hit Add Scripted Field.

In your case, you will enter f1 as the Name and doc['x'].value == 0 ? 1 : 0 as the Script. You'll then add a second scripted field with f2 as the Name and doc['x'].value != 0 ? 1 : 0 as the script.

The ?: is the ternary (conditional) operator and works as in most languages: it evaluates the condition before the ?; if the condition is true, the expression takes the value of whatever is after the ?, and if it's false, it takes the value of whatever is after the :.
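As a sanity check, the logic of those two scripts can be mirrored in plain Python (a sketch only: the real scripted fields are evaluated inside Elasticsearch, typically as Groovy or Lucene expressions, not Python):

```python
def scripted_fields(x):
    """Mirror of the two Kibana scripted fields:
    f1: doc['x'].value == 0 ? 1 : 0
    f2: doc['x'].value != 0 ? 1 : 0
    """
    f1 = 1 if x == 0 else 0  # Python spelling of the first ternary
    f2 = 1 if x != 0 else 0  # Python spelling of the second ternary
    return f1, f2

print(scripted_fields(0))  # -> (1, 0)
print(scripted_fields(7))  # -> (0, 1)
```

Whatever value x takes, exactly one of f1/f2 is 1, which is the behavior the question asks for.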

Fuzzy search not working with dismax query parser

solr,lucene

DisMax, by design, does not support all Lucene query syntax in its query parameter. From the documentation: This query parser supports an extremely simplified subset of the Lucene QueryParser syntax. Quotes can be used to group phrases, and +/- can be used to denote mandatory and optional clauses ... but...

How to compute the scores based on field data in elasticsearch

elasticsearch

You can achieve this by using scripting. Try the script below: { "query": { "function_score": { "functions": [ { "script_score": { "script": "_score * doc['bodyWeight'].value / doc['height'].value" } } ], "score_mode": "sum", "boost_mode": "replace" } } } Likewise, you can compute the score using field data. For more reference in...

ElasticSearch- “No query registered for…”

search,indexing,elasticsearch

Write your query like this, i.e. sort needs to go to the top level and not be nested in the query part: { "query": { "filtered": { "query": { "query_string": { "fields": [ "description" ], "query": "sun" } }, "filter": { "term": { "permissions": 1 } } } }, "sort": [ { "likes_count":...

indexing names in json using elasticsearch in couchdb

json,elasticsearch,couchdb,elasticsearch-river

Since you're using the elasticsearch-river-couchdb plugin, you can configure the river with a groovy script that will remove all the fields but the ones you specify. An example is given in the official documentation of the plugin and simply amounts to add the following the script to the couchdb object:...

elastic search sort in aggs by column

sorting,elasticsearch,group-by,order

Edit to reflect clarification in comments: To sort an aggregation by string value, use an intrinsic sort; however, sorting on non-numeric metric aggregations is not currently supported. "aggs" : { "order_by_title" : { "terms" : { "field" : "title", "order": { "_term" : "asc" } } } } ...

NEST ElasticSearch.NET Escape Special Characters

c#,elasticsearch,nest

You will need to setup a multifield as the dash is causing the terms to be split. I have found an answer to a similar question which answers yours: http://stackoverflow.com/a/28859145/4134821

Query returns both documents instead of just one

c#,.net,elasticsearch,nest

term filters don't analyze the text to be searched. Meaning, if you search for 000A8D810F5A, this is exactly what it searches for (upper-case letters included). But your macAddr and insturmentName fields and others are just strings. Meaning, they use the standard analyzer, which lowercases the terms. So, you are searching...
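A minimal sketch of why the match fails (assuming the default standard analyzer; the tokenizer here is heavily simplified to whitespace splitting plus lowercasing):

```python
def standard_analyze(text):
    # Simplified stand-in for the standard analyzer:
    # split on whitespace and lowercase each token.
    return [token.lower() for token in text.split()]

indexed_terms = standard_analyze("Device 000A8D810F5A online")

# A term query is NOT analyzed, so the upper-case input misses:
print("000A8D810F5A" in indexed_terms)          # -> False
# Lowercasing the search term first (or using a match query,
# which does analyze its input) works:
print("000A8D810F5A".lower() in indexed_terms)  # -> True
```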

Elasticsearch aggregations over regex matching in a list

regex,elasticsearch

You can achieve that with a simple terms aggregation parametrized with an include property which you can use to specify either a regexp (e.g. alt.* in your case) or an array of values to be included in the buckets. Note that there is also the exclude counterpart, if needed: {...

get buckets count in elasticsearch aggregations

elasticsearch,elastic

You can combine both of these aggregations into 1 request. { "aggs" : { "uniques" : { "cardinality" : { "field" : "guid" } }, "uniquesTerms": { "terms": { "field": "guid" }, "aggs": { "jobs": { "top_hits": { "_source": "title", "size": 1 }} } } } ...

Recycling app pool each time something is published

.net,iis,lucene,umbraco,application-pool

There's an issue with frequent app pool recycles when you update files in App_Data frequently (which Umbraco does). A MS HotFix was posted for it this morning: see MS download here. It sounds like this might be the issue that you've been having.

Docker container http requests limit

http,elasticsearch,docker

So, it wasn't a problem with either Docker or Elastic. Just to recap, the same script throwing PUT requests at an Elasticsearch setup locally worked, but when throwing them at a container with Elasticsearch it failed after a few thousand documents (20k). To note that the overall number of documents was roughly...

ElasticSearch: How to search on different fields that are not related that are arrays of objects

elasticsearch

Nesting [bool][must][bool][should] isolates "minimum_should_match" to only the list (array of objects) being searched on. See the below example. { "query" : { "bool" : { "must" : [ { "multi_match":{ "query":"disease resistant", "type":"cross_fields", "fields":[ "description", "planting", "maintenance", "name" ], "tie_breaker":0.3 } }, "bool" : { "should" : [ {"match" :...

Creating Index in Elasticsearch using Java API giving NoClassFoundException

java,indexing,elasticsearch

ES 1.5.2 uses Lucene 4.10.4. Get the jars you need from here. Note that you might need not only lucene-core but others as well. Just browse the Maven repo and look for other jars as well, but all should be version 4.10.4.

Parsing Google Custom Search API for Elasticsearch Documents

json,python-2.7,elasticsearch,google-search-api

Here is a possible answer to your problem. def myfunk( inHole, outHole): for keys in inHole.keys(): is_list = isinstance(inHole[keys],list); is_dict = isinstance(inHole[keys],dict); if is_list: element = inHole[keys]; new_element = {keys:element}; outHole.append(new_element); if is_dict: element = inHole[keys].keys(); new_element = {keys:element}; outHole.append(new_element); myfunk(inHole[keys], outHole); if not(is_list or is_dict): new_element = {keys:inHole[keys]}; outHole.append(new_element);...

Elasticsearch boost per field with function score

elasticsearch,lucene,solr-boost

The _boost field (document-level boost) was removed, but field-level boosts and query boosts still work just fine. It looks like you are looking for query boosts. So, if you wanted to boost matches on field1: "bool": { "should": [{ "terms": { "field1": ["67", "93", "73", "78", "88", "77"], "boost": 2.0...

NEST - Using GET instead of POST/PUT for searching

c#,elasticsearch,nest

Yes https://github.com/elastic/elasticsearch-net/blob/develop/src/Nest/DSL/SearchDescriptor.cs line number 135 public static void Update(IConnectionSettingsValues settings, ElasticsearchPathInfo<SearchRequestParameters> pathInfo, ISearchRequest request) { pathInfo.HttpMethod = request.RequestParameters.ContainsKey("source") ? PathInfoHttpMethod.GET : PathInfoHttpMethod.POST; } Obviously you need to have SearchRequest.RequestParameters.ContainsKey("source") return true for it to do a GET. In the future, just RTFM....

How to have multiple regex based on or condition in elasticsearch?

elasticsearch

Use a should instead of a must: { "fields": [ "host" ], "filter": { "bool": { "should": [ { "regexp": { "_uid": { "value": ".*000ANT.*" } } }, { "regexp": { "_uid": { "value": ".*0BBNTA.*" } } } ] } } } ...

How to use arrays in lambda expressions?

c#,elasticsearch,nest

The params keyword in C# indicates that the method takes a variable number of parameters. For example, a method with this signature: public void DoStuff(params string[] values) { ... } Could be called like this: DoStuff(); DoStuff("value1"); DoStuff("value1", "value2", "value3", "value4", "value5"); //etc. So in your case, the array is...

ElasticSearch (Nest) Terms sub aggregation of Terms - Not working as intended

elasticsearch,nest

Ok, then what you need is simply to declare your trainings property with type: nested in your mapping, like this: { "mappings": { "person": { "properties": { "id": { "type": "string" }, "name": { "type": "string" }, "trainings": { "type": "nested", <----- add "nested" here "properties": { "attendanceDate": { "type":...

How to define a bucket aggregation where buckets are defined by arbitrary filters on a field (GROUP BY CASE equivalent)

elasticsearch

You could use a terms aggregation with a scripted field: { query: { filtered: { filter: { regexp: { Url : ".*interestingpage.*" } } } }, size: 0, aggs: { myaggregation: { terms: { script: "doc['UserAgent'] =~ /.*android.*/ || doc['UserAgent'] =~ /.*ipad.*/ || doc['UserAgent'] =~ /.*iphone.*/ || doc['UserAgent'] =~ /.*mobile.*/...

Operator '??' cannot be applied to operands of type IQueryContainer and lambda expression

c#,elasticsearch,nest

Thanks to the comment of @Mrinal Kamboj and the answer of @Wormbo, I found my own answer: I changed the argument type to QueryContainer and if the argument is null, a new QueryMatchAll query is created, this works for me: public void ProcessQuery(QueryContainer query = null) { var searchResult =...

How to write search queries in kibana using Query DSL for Elasticsearch aggregation

elasticsearch,querydsl,kibana-4

I am not sure you can do this, as the Discovery section already uses the timestamp aggregation. Can you explain what you are trying to do? There are ways to add custom aggregations in the visualizations. If you open up the advanced section on the aggregation in the visualization you...

ElasticSearch - Configuration to Analyse a document on Indexing

elasticsearch

Elasticsearch is "near real-time" by nature, i.e. all indices are refreshed every second (by default). While it may seem enough in a majority of cases, it might not, such as in your case. If you need your documents to be available immediately, you need to refresh your indices explicitly by...

logstash tab separator not escaping

elasticsearch,logstash

Instead of using "\t" as the separator, input an actual tab, like this: filter { csv { separator => " " } } ...

Elasticsearch: How to query using partial phrases in quotation marks

elasticsearch

query_string will do just this. { "query": { "query_string": { "query": "example \"hello world\" elasticsearch", "default_operator": "AND" } } } ...

Re-index object with new fields

elasticsearch,nest

If you want to update, you can do this: var response = _Instance.Update<BusinessElastic, object>(u => u .Index(elasticSearchIndex) .Type("business") .Id(result.Hits.FirstOrDefault().Id) .Doc(new Document {Column1="", Column2=""}) .DocAsUpsert() ); ...

Javascript: Altering an object where dot notation is used [duplicate]

javascript,jquery,elasticsearch

This is because the binding you've used (i.e., 'terms.region.slug') is incompatible with dot-notation. Use of dot-notation requires that the binding be parseable as an identifier. However, bracket notation is equivalent to dot-notation and allows any binding to be used. console.log( filter.and[0].term['terms.region.slug'] );...

Understanding Apache Lucene's scoring algorithm

search,solr,lucene,full-text-search,hibernate-search

Scoring calculation is something really complex. Here, you have to begin with the primal equation: score(q,d) = coord(q,d) · queryNorm(q) · Σ_{t in q} ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) ) As you said, you have tf, which means term frequency, and its value is the square root of...
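As a sketch of the first two factors (using the defaults of Lucene's classic TFIDFSimilarity; the sample numbers below are made up for illustration):

```python
import math

def tf(term_freq):
    # Lucene default: tf(t in d) = sqrt(frequency of t in d)
    return math.sqrt(term_freq)

def idf(num_docs, doc_freq):
    # Classic similarity: idf(t) = 1 + ln(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))

# A term occurring 4 times in a document, present in 9 of 1000 docs:
print(tf(4))         # -> 2.0
print(idf(1000, 9))  # -> 1 + ln(100), about 5.61
```

The idf factor is squared in the full score equation, which is why rare terms dominate the ranking so strongly.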

Get document on some condition in elastic search java API

java,elasticsearch,elasticsearch-plugin

When indexing documents in this form, Elasticsearch will not be able to parse those strings as dates correctly. In case you transformed those strings to correctly formatted timestamps, the only way you could perform the query you propose is to index those documents in this format { "start": "2010-09", "end":...

How to use all the cores of Solr in solrj

java,indexing,solr,lucene,solrj

Here, your code still points to core1: HttpSolrClient solrClient = new HttpSolrClient("http://localhost:8983/solr/core1" In case you want to have the indexes for core2, you need to change it here: HttpSolrClient solrClient = new HttpSolrClient("http://localhost:8983/solr/core2" After this change, try to run the job; it will index for core2....

ElasticSearch Multiple Scrolls Java API

java,scroll,elasticsearch,parallel-processing

After searching some more, I got the impression that this (the same scrollId) is by design; the timeout is reset after each call (see Elasticsearch scan and scroll - add to new index). So you can only get one open scroll per index. https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html states: Scrolling is not...

Elasticsearch - Order search results ASC

c#,sorting,elasticsearch,nest

This example is working, maybe it will put some light on your issue. var indicesResponse = client.DeleteIndex(descriptor => descriptor.Index(indexName)); client.CreateIndex(indexName, c => c .AddMapping<Exhibitor>(m => m .MapFromAttributes() .Properties(o => o .MultiField(mf => mf .Name(x => x.CompanyName) .Fields(fs => fs .String(s => s.Name(t => t.CompanyName).Index(FieldIndexOption.Analyzed).Analyzer("standard")) .String(s => s.Name(t =>...

Elasticsearch NumberFormatException when running two consecutive java tests

java,date,elasticsearch,numberformatexception,spring-data-elasticsearch

Since the exception complains about a NumberFormatException, you should try sending the date as a long (instead of a Date object), since this is how dates are stored internally. Note that I'm calling date.getTime() in the code below: SearchQuery searchQuery = new NativeSearchQueryBuilder() .withQuery(matchAllQuery()) .withFilter(rangeFilter("publishDate").lt(date.getTime())).build(); ...

elasticsearch aggregation group by null key

elasticsearch

I don't think you can do this with terms. Try with another aggregation: { "aggs": { "myAggrs": { "terms": { "field": "system" } }, "missing_system": { "missing": { "field": "system" } } } } And the result will be: "aggregations": { "myAggrs": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ {...

Not able to access Kibana running in a Docker container on port 5601

elasticsearch,docker,dockerfile,kibana-4

The parent Dockerfile devdb/kibana uses a script to start kibana and elasticsearch when the docker container is started. See CMD ["/sbin/my_init"] and the script itself. When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfile. Since your CMD only starts gunicorn,...

Elasticsearch geospatial search, problems with index setup

elasticsearch,geospatial

There are a couple of issues in your code: Issue 1: When you create your document in the second snippet, you're not using the correct mapping type and your body doesn't include the correct field name as declared in your mapping: client.create({ index: 'events', type: 'geo_point', <-------- wrong type body: {...

Searching a TextField and IntField together seperated by an AND condition In Lucene

java,search,indexing,lucene

There are a few possibilities. First, what I would most recommend in this case, is perhaps you don't need those IntFields at all. Zip codes and IDs are usually not treated as numbers. They are identifiers that happen to be comprised of digits. If your zip code is 23456, that...

Elasticsearch standard analyser stopwords

elasticsearch

This would be the list of stopwords for the standard analyzer: http://grepcode.com/file/repo1.maven.org/maven2/org.apache.lucene/lucene-analyzers-common/4.9.0/org/apache/lucene/analysis/core/StopAnalyzer.java?av=f#50 50 static { 51 final List<String> stopWords = Arrays.asList( 52 "a", "an", "and", "are", "as", "at", "be", "but", "by", 53 "for", "if", "in", "into", "is", "it", 54 "no", "not", "of", "on", "or", "such", 55 "that", "the", "their", "then",...

Strange behaviour of limit in Elasticsearch

python,elasticsearch

The limit filter doesn't limit the number of documents that are returned, just the number of documents that the query evaluates on each shard. If you want to limit the number of documents returned, you need to use the size parameter, or in your case the Python slicing API, like...
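The distinction can be sketched in plain Python (hypothetical shard data; limit is applied per shard before results are merged, while size truncates the merged result):

```python
def limit_filter(shards, limit):
    # The 'limit' filter caps how many docs EACH shard evaluates,
    # so the merged result may still hold limit * number_of_shards docs.
    return [doc for shard in shards for doc in shard[:limit]]

def size_param(merged_hits, size):
    # The 'size' parameter truncates the final, merged result set.
    return merged_hits[:size]

shards = [["a1", "a2", "a3"], ["b1", "b2", "b3"]]
print(limit_filter(shards, 2))                 # -> ['a1', 'a2', 'b1', 'b2']
print(size_param(limit_filter(shards, 2), 2))  # -> ['a1', 'a2']
```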

ElasticSearch - how to get the auto generated id from an insert query

c#,mysql,database,elasticsearch,nest

You can find the id values from the ISearchResponse (based on your code example above) by looking at the objects in the Hits collection, rather than the Documents collection. Each Hit has an Id property. In the original indexing call (assuming you're doing that individually -- not via the _bulk...

How to get duplicate field values in elastic search by field name without knowing its value

elasticsearch

You can use Terms Aggregation for this. POST <index>/<type>/_search?search_type=count { "aggs": { "duplicateNames": { "terms": { "field": "EmployeeName", "size": 0, "min_doc_count": 2 } } } } This will return all values of the field EmployeeName which occur in at least 2 documents....

How to read data in logs using logstash?

elasticsearch,logstash

You can use the following pattern for the same: %{GREEDYDATA}\[name:%{WORD:name};%{GREEDYDATA},acc:%{NOTSPACE:account}\] GREEDYDATA is defined as follows: GREEDYDATA .* The key lies in understanding the GREEDYDATA macro. It matches as many characters as possible....
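A rough Python-regex equivalent of that grok pattern (the log line below is invented for illustration; GREEDYDATA maps to .*, WORD to \w+, NOTSPACE to \S+):

```python
import re

# Grok: %{GREEDYDATA}\[name:%{WORD:name};%{GREEDYDATA},acc:%{NOTSPACE:account}\]
pattern = re.compile(r".*\[name:(?P<name>\w+);.*,acc:(?P<account>\S+)\]")

line = "2015-06-01 10:00:00 INFO [name:alice;id:42,acc:ACC123]"
m = pattern.match(line)
print(m.group("name"), m.group("account"))  # -> alice ACC123
```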

Get elasticsearch result based on two keys

elasticsearch,elastic

You need a query_string not a match: { "query": { "query_string": { "query": "PayerAccountId:\"156023466485\" AND UsageStartDate:[2015-01-01 TO 2015-10-01]" } } } ...

ElasticSearch REST - insert JSON string without using class

elasticsearch,elastic,elasticsearch-net

You can use low level client to pass raw json. var elasticsearchClient = new Elasticsearch.Net.ElasticsearchClient(settings); var elasticsearchResponse = elasticsearchClient.Index("index", "type", "{\"UserID\":1,\"Username\": \"Test\",\"EmailID\": \"[email protected]\"}"); UPDATE Based on documentation, try this one: var sb = new StringBuilder(); sb.AppendLine("{ \"index\": { \"_index\": \"indexname\", \"_type\": \"type\" }}"); sb.AppendLine("{ \"UserID\":1, \"Username\": \"Test\", \"EmailID\": \"[email protected]\" }");...

Elasticsearch - Query document missing an array value

elasticsearch

Try this (with lowercase value for the terms bool): { "query": { "bool": { "must": [ { "match_all": {} } ], "must_not": { "terms": { "interdictions": [ "s2v" ] } } } }, "from": 0, "size": 10 } Most probably, you have an analyzer (maybe the standard default one) that...

ElasticSearch asynchronous post

database,post,asynchronous,elasticsearch,get

To ensure data is available, you can make a refresh request to corresponding index before GET/SEARCH: http://localhost:9200/your_index/_refresh Or refresh all indexes: http://localhost:9200/_refresh ...

Elasticsearch and C# - query to find exact matches over strings

c#,.net,database,elasticsearch,nest

You can use filtered query with term filter: { "filtered": { "query": { "match_all": { } }, "filter": { "bool" : { "must" : [ {"term" : { "macaddress" : "your_mac" } }, {"term" : { "another_field" : 123 } } ] } } } } NEST version (replace dynamic...

MultiMatch query with Nest and Field Suffix

c#,elasticsearch,nest

Did you try to use extension method Suffix? This is how you can modify your query: ... .OnFields( f => f.ScreenData.First().ValueString, f => f.ScreenData.First().ValueString.Suffix("english")) .Type(TextQueryType.BestFields) ... Hope it helps....

Bad scoring due to different maxDocs of IDF

elasticsearch

The default search type is query_then_fetch. Both query_then_fetch and query_and_fetch involve calculating the term and document frequency locally on each of the shards in the index. However, if you want a more accurate calculation of term/document frequency, you can use dfs_query_then_fetch/dfs_query_and_fetch. Here the frequency is calculated across all the...