I am trying to search names in Elasticsearch.
Consider a name such as kanal-kannan.
Normally we search a name with a trailing wildcard *, so I tried to search like this -
"/index/party_details/_search?size=200&from=0&q=(first_name_v:kanal-*)"
This returns zero records.
Unless the hyphen character has been dealt with specifically by the analyzer, the two words in your example, kanal and kannan, will be indexed separately, because any non-alpha character is treated by default as a word delimiter.
Have a look at the documentation for Word Delimiter Token Filter and specifically at the type_table parameter.
Here's an example I used to ensure that an email field was correctly indexed:
// Custom word_delimiter filter: treat "@", ".", "-" and "_" as part of a token
// instead of splitting on them.
ft.custom_delimiter = {
    "type": "word_delimiter",
    "split_on_numerics": false,
    "type_table": ["@ => ALPHANUM", ". => ALPHANUM", "- => ALPHA", "_ => ALPHANUM"]
};
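For reference, a minimal sketch of how a filter like this could be wired into the index settings for the name field from the question (the analyzer name and the whitespace tokenizer are my assumptions, not part of the original setup):

PUT /index
{
  "settings": {
    "analysis": {
      "filter": {
        "custom_delimiter": {
          "type": "word_delimiter",
          "split_on_numerics": false,
          "type_table": ["- => ALPHA"]
        }
      },
      "analyzer": {
        "name_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["lowercase", "custom_delimiter"]
        }
      }
    }
  }
}

The whitespace tokenizer matters here: the standard tokenizer would already have split kanal-kannan on the hyphen before the word_delimiter filter ever saw it.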
- is a special character that needs to be escaped to be searched literally: \-
If you use the q parameter (that is, the query_string query), the rules of the Lucene Queryparser Syntax apply.
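With the hyphen escaped, the query from the question would look something like this (same index, type, and field as above):

/index/party_details/_search?size=200&from=0&q=(first_name_v:kanal\-*)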
Depending on your analyzer chain, you might not have any - characters in your index; replacing them with a space in your query would work in that case too.
@l4rd's answer should work properly (I have the same setup). Another option is to map the field with the keyword analyzer to prevent tokenizing altogether. Note that the keyword tokenizer won't lowercase anything, so use a custom analyzer with the keyword tokenizer in that case.
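A minimal sketch of such a custom analyzer, combining the keyword tokenizer with a lowercase filter (the analyzer name here is illustrative):

PUT /index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "keyword_lowercase": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  }
}

With this analyzer on first_name_v, the whole value kanal-kannan is indexed as a single lowercased token, so a prefix search like first_name_v:kanal\-* (with the hyphen escaped, as noted in the other answer) can match it.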