I'm using the modules RedisJSON and RediSearch together to perform search queries on JSON data.
For every JSON object, I need to index all the string elements of an array field so that I can retrieve the object by querying any one of the strings in the array (e.g., get some book data by searching for one of the authors contained in a string array in the book's JSON).
However, this does not currently seem to be possible.
Is there any possible workaround or am I stuck?
You need to try the latest versions of RediSearch and RedisJSON. The example from the GitHub issue you are referring to works for me with RedisJSON 2.0.5 and RediSearch 2.2.5:
127.0.0.1:6379> ft.create index on json schema $.names[0:].first as first tag
OK
127.0.0.1:6379> json.set person:1 $ '{"names":[{"first": "fname1","last": "lname1"},{"first": "fname2","last": "lname2"},{"first": "fname3", "last": "lname3"}]}'
OK
127.0.0.1:6379> ft.search index '@first:{fname1}'
1) (integer) 1
2) "pserson:1"
3) 1) "$"
2) "{\"names\":[{\"first\":\"fname1\",\"last\":\"lname1\"},{\"first\":\"fname2\",\"last\":\"lname2\"},{\"first\":\"fname3\",\"last\":\"lname3\"}]}"
127.0.0.1:6379> ft.search index '@first:{fname2}'
1) (integer) 1
2) "pserson:1"
3) 1) "$"
2) "{\"names\":[{\"first\":\"fname1\",\"last\":\"lname1\"},{\"first\":\"fname2\",\"last\":\"lname2\"},{\"first\":\"fname3\",\"last\":\"lname3\"}]}"
Here is the relevant redis-cli info modules output:
module:name=ReJSON,ver=20005,api=1,filters=0,usedby=[search],using=[],options=[handle-io-errors]
module:name=search,ver=20205,api=1,filters=0,usedby=[],using=[ReJSON],options=[handle-io-errors]
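Applied to the book example from the question, the same idea looks roughly like this with redis-py (the index name, key, and field names here are made up for illustration):
import redis

r = redis.Redis()

# Index every string in the authors array as a TAG value
# (requires RedisJSON >= 2.0 and RediSearch >= 2.2, as above).
r.execute_command(
    "FT.CREATE", "bookIdx", "ON", "JSON",
    "SCHEMA", "$.authors[*]", "AS", "author", "TAG")

r.execute_command(
    "JSON.SET", "book:1", "$",
    '{"title": "Some Title", "authors": ["Tolkien", "Lewis"]}')

# Any one of the authors finds the book.
print(r.execute_command("FT.SEARCH", "bookIdx", "@author:{Tolkien}"))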
I have extracted a variable (userID_ALL) that contains the key-value pairs of some users. I would like to make a foreach loop that calls an API using the ID of each user. Is there a way to access each user's ID in the foreach loop from the variable?
(Screenshot: extracted key-value pairs from the JSON Extractor)
I believe there should be a better way to get the IDs from the response; however, if you have to deal with the variables as in your screenshot, you can fetch the "id" attribute value using the following __groovy() function:
${__groovy(new groovy.json.JsonSlurper().parseText(vars.get('userID')).id,)}
More information:
JsonSlurper Documentation
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
I have a Postgres table posts with a column of type jsonb which is basically a flat array of tags.
What I need to do is run a LIKE query against the elements of that tags column, so that I can find posts whose tags begin with some partial string.
Is such a thing possible in Postgres? I keep finding super complex examples, and nobody ever describes this basic, simple scenario.
My current code works fine for checking if there are posts having specific tags:
select * from posts where tags #> '"TAG"'
and I'm looking for a way of running something along the lines of
select * from posts where tags #> '"%TAG%"'
SELECT *
FROM posts p
WHERE EXISTS (
   SELECT FROM jsonb_array_elements_text(p.tags) tag
   WHERE  tag LIKE '%TAG%'
   );
Related, with explanation:
Search a JSON array for an object containing a value matching a pattern
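From application code you would pass the pattern as a parameter rather than splicing it into the SQL. A minimal psycopg2 sketch, assuming the table from the question (the connection string and search term are placeholders):
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
with conn, conn.cursor() as cur:
    # A post qualifies if any element of its jsonb tags array matches.
    cur.execute("""
        SELECT *
        FROM   posts p
        WHERE  EXISTS (
           SELECT FROM jsonb_array_elements_text(p.tags) tag
           WHERE  tag LIKE %s
           )
        """, ("%TAG%",))
    rows = cur.fetchall()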
Or simpler with the @? operator since Postgres 12 implemented SQL/JSON:
SELECT *
-- optional to show the matching item:
-- , jsonb_path_query_first(tags, '$[*] ? (@ like_regex "^tag" flag "i")')
FROM posts
WHERE tags @? '$[*] ? (@ like_regex "TAG")';
The operator @? is just a wrapper around the function jsonb_path_exists(). So this is equivalent:
...
WHERE jsonb_path_exists(tags, '$[*] ? (@ like_regex "TAG")');
Neither has index support. (It may be added for the @? operator later, but it is not there in Postgres 13 yet.) So those queries are slow for big tables. A normalized design, like Laurenz already suggested, would be superior, with a trigram index:
PostgreSQL LIKE query performance variations
For just prefix matching (LIKE 'TAG%', no leading wildcard), you could make it work with a full text index:
CREATE INDEX posts_tags_fts_gin_idx ON posts USING GIN (to_tsvector('simple', tags));
And a matching query:
SELECT *
FROM posts p
WHERE to_tsvector('simple', tags) @@ 'TAG:*'::tsquery
Or use the english dictionary instead of simple (or whatever fits your case) if you want stemming for natural English language.
to_tsvector(json(b)) requires Postgres 10 or later.
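From application code, the prefix can again be passed as a parameter by building the tsquery with to_tsquery() instead of a cast (another psycopg2 sketch; the ':*' suffix marks a prefix match):
# Reusing the psycopg2 connection from the sketch above.
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT *
        FROM   posts
        WHERE  to_tsvector('simple', tags) @@ to_tsquery('simple', %s)
        """, ("TAG:*",))
    rows = cur.fetchall()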
Related:
Get partial match from GIN indexed TSVECTOR column
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
I'm trying to figure out the most appropriate way to retrieve content (either a hash or a string) from a set of ids.
The Redis documentation talks about a tagging system here, in which sets are used to filter down books, but it does not mention how you would then get information about a book. You could obviously use MGET with a list of ids once you've filtered down to the ids, but this only really works if you're working with string values and not hashes. It also means that you need to return the ids to your application code and convert "id" to "book:id". Is there a better way to do this?
Well, there are many approaches you can take. One is to use MGET, as you suggested. What I generally use is the SORT command. Assume we have three hashes like the ones below and one set containing the ids of those hashes:
HMSET 1 fname a lname b
HMSET 2 fname c lname d
HMSET 3 fname e lname f
SADD fetch_from_set 1
SADD fetch_from_set 2
SADD fetch_from_set 3
SORT fetch_from_set BY NOSORT GET *->fname GET *->lname
1) "a"
2) "b"
3) "c"
4) "d"
5) "e"
6) "f"
So by using this you get the values of fname and lname for every id in the set. Since NOSORT skips sorting the set, it should not hamper performance much.
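The same call through redis-py, whose sort() method maps directly onto SORT (key and field names as in the example above):
import redis

r = redis.Redis(decode_responses=True)

# BY nosort skips the sorting step entirely; GET pulls both hash
# fields for every id in the set, and groups=True pairs them per id.
rows = r.sort("fetch_from_set", by="nosort",
              get=["*->fname", "*->lname"], groups=True)
print(rows)  # e.g. [("a", "b"), ("c", "d"), ("e", "f")]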
Also, since Redis 2.8 you have the SCAN command. I have not used it, but you may want to look at it.
My structure
cat:id:name -> name of category
cat:id:subcats -> set of subcategories
cat:list -> list of category ids
The following gives me a list of cat ids:
lrange cat:list 0 -1
Do I have to iterate over each id from the above command to get the name in my script? That seems inefficient. How can I get a list of category names from Redis?
There are a couple of different approaches. You may want to have the values in the list be delimited/encoded strings that contain the id, the name, and any other value you need quick access to. I recommend JSON for interoperability and compact encoding, but there are other formats that are more performant.
Another option is to, as you said, iterate. You can make this more efficient by getting all your keys in a single request and then using MGET, pipelining, or MULTI/EXEC to fetch all the names in a single, efficient operation.
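For the iterate-and-pipeline route, a redis-py sketch against the key structure above (one round trip for the ids, one for all the names):
import redis

r = redis.Redis(decode_responses=True)

cat_ids = r.lrange("cat:list", 0, -1)

# Queue one GET per category and send them all in a single round trip.
pipe = r.pipeline()
for cat_id in cat_ids:
    pipe.get(f"cat:{cat_id}:name")
names = pipe.execute()

categories = dict(zip(cat_ids, names))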
I was hoping to implement an easy but effective text search for App Engine that I could use until official text search capabilities for App Engine are released. I see there are libraries out there, but it's always a hassle to install something new. I'm wondering if this is a valid strategy:
1) Break each property that needs to be text-searchable into a set (list) of text fragments
2) Save record with these lists added
3) When searching, just use equality filters on the list properties
For example, if I had a record:
{
firstName="Jon";
lastName="Doe";
}
I could save a property like this:
{
firstName="Jon";
lastName="Doe";
// not case sensitive:
firstNameSearchable=["j","o", "n","jo","on","jon"];
lastNameSearchable=["d","o","e","do","oe","doe"];
}
Then to search, I could do this and expect it to return the above record:
//pseudo-code:
SELECT person
WHERE firstNameSearchable=="jo" AND
lastNameSearchable=="oe"
Is this how text searches are implemented? How do you keep the index from getting out of control, especially if you have a paragraph or something? Is there some other compression strategy that is usually used? I suppose if I just want something simple, this might work, but it's nice to know the problems that I might run into.
Update:
Ok, so it turns out this concept is probably legitimate. This blog post also refers to it: http://googleappengine.blogspot.com/2010/04/making-your-app-searchable-using-self.html
Note: the source code in the blog post above does not work with the current version of Lucene. I installed the older version (2.9.3) as a quick fix, since Google is supposed to come out with its own text search for App Engine soon enough anyway.
The solution suggested in the response below is a nice quick fix, but due to Bigtable's limitations, it only works if you are querying on one field, because you can only use inequality operators on one property in a query:
db.GqlQuery("SELECT * FROM MyModel WHERE prop >= :1 AND prop < :2", "abc", u"abc" + u"\ufffd")
If you want to query on more than one property, you can save indexes for each property. In my case, I'm using this for some auto-suggest functionality on small text fields, not actually searching for word and phrase matches in a document (you can use the blog post's implementation above for that). It turns out this is pretty simple, and I don't really need a library for it. Also, I anticipate that if someone is searching for "Larry", they'll start by typing "La..." as opposed to starting in the middle of the word: "arry". So if the property is for a person's name or something similar, the index only needs the substrings starting at the first letter, so the index for "Larry" would just be {"l", "la", "lar", "larr", "larry"}.
I did something different for data like phone numbers, where you may want to search starting from the beginning or from middle digits. In this case, I just stored the entire set of substrings of length 3 or more, so the phone number "123-456-7890" would be: {"123", "234", "345", ..... "123456789", "234567890", "1234567890"}, a total of (10*((10+1)/2))-(10+9) = 36 index entries. Actually, what I did was a little more complex in order to remove some unlikely-to-be-used substrings, but you get the idea.
Then your query would be:
(Pseudo-code)
SELECT * FROM Person WHERE
firstNameSearchIndex == "lar" AND
phonenumberSearchIndex == "1234"
The way App Engine works is that an equality filter on a list property matches if any element of the list equals the queried value, so each query substring above counts as a match.
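A minimal sketch of building those searchable lists in Python (the helper names are made up; the thresholds follow the description above):
def prefix_index(value):
    # All prefixes of the lowercased value:
    # "Larry" -> ["l", "la", "lar", "larr", "larry"]
    value = value.lower()
    return [value[:i] for i in range(1, len(value) + 1)]

def substring_index(value, min_len=3):
    # All substrings of length >= min_len, e.g. for phone digits;
    # "1234567890" yields the 36 entries computed above.
    return [value[i:j]
            for i in range(len(value))
            for j in range(i + min_len, len(value) + 1)]

person = {
    "firstName": "Larry",
    "firstNameSearchIndex": prefix_index("Larry"),
    "phonenumberSearchIndex": substring_index("1234567890"),
}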
In practice, this won't scale. For a string of n characters, you need n(n+1)/2 index entries to cover every substring. A 500 character string would need 125,250 of them, well past the datastore's per-entity index limits, so the entity would fail to write.
Implementations like search.SearchableModel create one index entry per word, which is a bit more realistic. You can't search for arbitrary substrings, but there is a trick that lets you match prefixes:
From the docs:
db.GqlQuery("SELECT * FROM MyModel
WHERE prop >= :1 AND prop < :2",
"abc", u"abc" + u"\ufffd")
This matches every MyModel entity with a string property prop that begins with the characters abc. The unicode string u"\ufffd" represents the largest possible Unicode character. When the property values are sorted in an index, the values that fall in this range are all of the values that begin with the given prefix.