ElasticSearch – How Does It Perform When Queried With 1,000 Conditions

I need to be able to search records that have any user ID within a group of user IDs in the query.
However, the number of user IDs that must be searched will grow substantially over time, so I must be able to add thousands of user IDs to a single query and search across all of them.
I'm considering using ElasticSearch for this via a managed service like bonsai.
How well does ElasticSearch perform when queried with thousands of conditions?

The answer depends on lots of things (number of servers, RAM, CPU, etc), and it will probably take some experimentation to figure out what works best for you. I'm confident that Elasticsearch can solve your problem, but it's hard to predict performance in general.
You might want to investigate terms lookup. Basically you store all the terms for which you want to search in a document in the index (or another one), then you can reference that list in your search.
So you could save the IDs you want to search for as
PUT /test_index/idlist/1
{
  "ids": [2, 1982, 939, 98716, 7611, 983838, ...]
}
Then you can search another type using that list with something like this, for example, with a top-level filter:
POST /test_index/doc/_search
{
  "filter": {
    "terms": {
      "id": {
        "index": "test_index",
        "type": "idlist",
        "id": "1",
        "path": "ids"
      }
    }
  }
}
This probably only makes sense if you're going to run the same query more than once. You could have more than one list of IDs, though, and give the documents holding lists descriptive IDs if it helps.
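As a rough sketch in Python, the two request bodies above can be built like this. The index, type, and field names are taken from the example, and the helper functions are purely illustrative, not part of any client library:

```python
# Sketch of the terms-lookup pattern: store the ID list once, then
# reference it from the search body instead of inlining thousands of
# IDs into every query. Names follow the example above.

def build_id_list_doc(ids):
    """Document holding the list of user IDs to search against."""
    return {"ids": ids}

def build_terms_lookup_query(index, doc_type, doc_id, path, field):
    """Search body that references the stored ID list document."""
    return {
        "filter": {
            "terms": {
                field: {
                    "index": index,
                    "type": doc_type,
                    "id": doc_id,
                    "path": path,
                }
            }
        }
    }

id_doc = build_id_list_doc([2, 1982, 939, 98716])
query = build_terms_lookup_query("test_index", "idlist", "1", "ids", "id")
```

The bodies would then be sent with PUT and POST as shown above; the advantage is that updating the ID list is a single document write, with no change to the query itself.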
Using a managed service makes it easy to experiment with different cluster setups (number of nodes, size of machines, data center, and so on). I would suggest you take a look at Qbox (I'm biased, since I work with Qbox). New customers get a $40 introductory credit, which is usually enough to experiment with a proof of concept.

Related

How to store nested relational data in Solr

I'm trying to store data within Solr so that I can best maintain the indexes. The problem I'm having is that my data structure is heavily nested. Example:
Company
(to many) Person
(to many) Property
(to many) Network
(to many) SubNetwork
I'm trying to create a full text search index for each SubNetwork that will display the current parent fields along side it.
Currently my data is completely denormalized, e.g.:
{
  "company": "Coca-Cola",
  "property": "1 plaza hotel",
  "network": "ABC",
  "subNetwork": "123"
}
Now if a user were to go into the application and change the name of the company, right now (in the denormalized state), that would require Solr to partially update (atomic update) many documents which doesn't feel very efficient. Re-indexing the index isn't a preferred solution as this is a multi tenanted application.
I have tried putting the relational data in separate indexes and then used a join within Solr, but this does not copy the joined index's fields into the final result, which means a full text search on all the fields isn't possible.
{!join from=inner_id to=outer_id}field:value
I'm trying to configure Solr in a way that when a parent record is updated, it only requires one atomic update but still retains the ability to search on all fields. Is this possible?
Unless you are seeing performance issues, your initial implementation seems correct, especially if you are returning the subnetwork and may be searching on subnetwork and parent values at the same time.
Doing an atomic update, under the covers, actually re-indexes the document anyway (and creates a new Lucene-level document). It also requires all fields to be stored so the document can be recreated. And the join reduces the scoring flexibility you can have.
One optimization you could do is to NOT store the parent fields, but keep them index-only. This will be more space-efficient and less disk/record re-hydration work. But then you cannot return those fields to the user and would have to fetch them from the original source instead.
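That optimization can be expressed as a Schema API payload. A minimal sketch, assuming you are managing the schema over HTTP; the field name comes from the example, and the payload would be POSTed to the core's /schema endpoint:

```python
# Sketch: defining a parent field as indexed-but-not-stored via the
# Solr Schema API "add-field" command. Not stored means it is
# searchable but cannot be returned in results.

def add_index_only_field(name, field_type="string"):
    """Schema API payload for a searchable, non-stored field."""
    return {
        "add-field": {
            "name": name,
            "type": field_type,
            "indexed": True,
            "stored": False,
        }
    }

payload = add_index_only_field("company")
```

The trade-off is exactly as described above: less disk and re-hydration work, but the value must be fetched from the original source when you need to display it.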

Limiting GAE Search API results by user

We have a use case where users must be able to search content that is only available in Groups that they have access to. The search must be across all groups that they have access to.
Some details:
A Group has many Posts, and a user may have access to hundreds of Groups and thousands of Posts within each Group.
A search for "Foo" should return all Groups with "Foo" in the name, as well as all Posts within the Groups they have access to that have "Foo" in the content.
The way I thought of dealing with it is to have a list of user_ids associated with each document in the index, and then include the user_id in the query string to verify that the user has access. Once the results are returned, we could do an additional check that they have access to the content before returning the results.
The document index is something like this:
fields = [
    search.TextField(name="data", value="some searchable stuff"),
    search.AtomField(name="post_id", value="id of post"),
    search.AtomField(name="group_id", value="id of group"),
    search.AtomField(name="user_id", value=user_id_1),
    search.AtomField(name="user_id", value=user_id_2),
    # ... add the thousand other users who have access to the group (done in a loop)
]
# Then a query run as user 123 would be as follows:
results = index.search("data = Foo AND user_id = 123")
My concern with the above approach:
Every new user who subscribes to a group would require the search index to be reindexed to include their user_id on each document.
Is there a better way of handling this use case?
Thanks
Rob
There is no simple answer to your question. You need to plan for (a) a typical use-case, and (b) extreme cases.
If a typical user belongs to 1-3 groups, searching by group_id may be the best solution. You will do 1-2 extra searches, but you won't need to re-index every document every time a user joins or leaves a group, which would be prohibitively expensive.
You can have a separate implementation for extreme cases. If a user belongs to more than X groups, it may be more efficient to retrieve all results matching the keyword, and then filter them by group_id.
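For the typical case, the group-based query string could be assembled like this. The field names come from the question; combining groups with OR is an assumption about how you would express membership in a GAE Search query string:

```python
# Sketch: filter by the groups a user belongs to instead of indexing
# every user_id onto every document.

def build_group_query(keyword, group_ids):
    """Query string matching the keyword within any of the user's groups."""
    groups = " OR ".join('group_id = "%s"' % g for g in group_ids)
    return "data = %s AND (%s)" % (keyword, groups)

q = build_group_query("Foo", ["g1", "g2", "g3"])
```

Membership changes then touch only the user's group list, not thousands of indexed documents.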
An alternative approach is to always retrieve all results regardless of group_id/user_id, and store them in Memcache. Then you can filter them in memory.
Users tend to search using the same keywords - depending on your corpus, 1% of words may account for up to 99% of searches. If you have a lot of users - and a big enough cache - you will get a lot of cache hits. Note that 1GB of cache can fit tens or even hundreds of thousands of query results. An additional advantage of this approach is that it speeds up all queries, especially phrase or multi-keyword searches.
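A minimal sketch of that caching pattern, with a plain dict standing in for Memcache; on App Engine you would use google.appengine.api.memcache instead, and the key normalization is an assumption for illustration:

```python
# Sketch: memoize search results keyed by the normalized query string,
# so repeated popular queries skip the search index entirely.

cache = {}

def cached_search(query, run_search):
    """Return cached results for the query, running the search on a miss."""
    key = "search:" + query.strip().lower()
    if key in cache:
        return cache[key]
    results = run_search(query)
    cache[key] = results
    return results

hits = cached_search("Foo", lambda q: ["post1", "post2"])
again = cached_search("foo ", lambda q: ["should-not-run"])  # cache hit
```

Because popular keywords dominate, even a modest cache absorbs most of the query load.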

How to select which database to use for object storing

I want to deploy a small web project I have in mind, where the data I want to save consists of structs with nested structs inside, and most of the time the inner structs do not have the same fields and types.
For example, I'd like something like this to be a "row" in a table:
{
  "event": "The Oscars",
  "place": "Los Angeles, USA",
  "date": "March 2, 2014",
  "awards": {
    "bestMovie": {
      "name": "someName",
      "director": "someDirector",
      "actors": [
        ... etc
      ]
    },
    "bestActor": "someActor"
  }
}
(JSON objects are easy for me to use at the moment, and to pass between the server and client side. The client side runs JavaScript.)
I started with MySQL/PHP but very soon saw that it didn't suit me. I tried MongoDB for a few days, but I don't know exactly how to refine my search for the best DB to use.
I want to be able to set some object models/schemas, and select exactly which part to update and which fields are unique in each struct.
Any suggestions? Thanks.
This is not an answerable question so will likely be closed.
There is no one right answer here, and there are a few questions to ask, such as data structure, speed requirements, and the old CAP theorem question of what you need:
Consistency
Availability
Partition tolerance
I would suggest Mongo is a great place to start if you are casually working away and don't anticipate having to deal with any of the issues above at scale. Couch is another similar option but doesn't have the same community size.
I say Mongo because your data is denormalized into a document, and Mongo is good at serving documents. It also speaks JSON!
An RDBMS would require you to normalize your documents and create relationships, which is quite a bit of work from where you are, relative to sticking the documents into a document store.
You could serialize the data using protocol buffers and put that in an RDBMS, but this is not advisable.
For blazing speed you could use Redis, which has constant-time lookups in memory. But this is better suited (in most cases) to ephemeral data like user sessions - not long-term persistent storage.
Finally, there are graph databases like Neo4j, which store relationships between nodes with typed edges. This suits social and recommendation problems quite well, but that's probably not the problem you're trying to solve - the question simply asks what is best for storing your data.
Looking at the possibilities, I think you'll find Mongo best suits your needs, as you already have JSON document structures and only need simple persistence for those documents.
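To make the document-store fit concrete: the nested event above lives as one document, and a single field deep inside it can be targeted with dot notation. A sketch with plain dicts; with pymongo the equivalent would be an update_one with a $set on the dotted path:

```python
# Sketch: a nested event stored as one document, with a minimal
# in-memory version of MongoDB's "$set" using dot notation.

event = {
    "event": "The Oscars",
    "place": "Los Angeles, USA",
    "date": "March 2, 2014",
    "awards": {
        "bestMovie": {"name": "someName", "director": "someDirector"},
        "bestActor": "someActor",
    },
}

def apply_set(doc, dotted_path, value):
    """Walk the dotted path and set the final key, like a $set update."""
    keys = dotted_path.split(".")
    target = doc
    for k in keys[:-1]:
        target = target[k]
    target[keys[-1]] = value

apply_set(event, "awards.bestMovie.director", "newDirector")
```

This is the "select exactly which part to update" requirement from the question: one targeted write, no joins, no touching sibling fields.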

Elastic search, multiple indexes vs one index and types for different data sets?

I have an application developed using the MVC pattern, and I would now like to index multiple models of it, meaning each model has a different data structure.
Is it better to use multiple indexes, one for each model, or to have a type within the same index for each model? Both ways would also require different search queries, I think. I have just started on this.
Are there performance differences between the two approaches when the data set is small or huge?
I would test the second question myself if somebody could recommend some good sample data for that purpose.
There are different implications to both approaches.
Assuming you are using Elasticsearch's default settings, having 1 index for each model will significantly increase the number of your shards: 1 index uses 5 shards, so 5 data models will use 25 shards, while having 5 object types in 1 index still uses only 5 shards.
Implications for having each data model as index:
Efficient and fast to search within an index, as the amount of data in each shard is smaller, since it is distributed across different indices.
Searching a combination of data models from 2 or more indices is going to generate overhead, because the query will have to be sent to more shards across indices, compiled and sent back to the user.
Not recommended if your data set is small since you will incur more storage with each additional shard being created and the performance gain is marginal.
Recommended if your data set is big and your queries are taking a long time to process, since dedicated shards are storing your specific data and it will be easier for Elasticsearch to process.
Implications for having each data model as an object type within an index:
More data will be stored within the 5 shards of an index, which means less overhead when you query across different data models, but your shard size will be significantly bigger.
More data within the shards is going to take a longer time for Elasticsearch to search through since there are more documents to filter.
Not recommended if you know you are going through terabytes of data and are not distributing it across different indices or multiple shards in your Elasticsearch mapping.
Recommended for small data sets, because you will not waste storage space for a marginal performance gain, since each shard takes up space in your hardware.
If you are asking what counts as big data vs. small data: typically it depends on the processor speed and RAM of your hardware, the amount of data you store in each field of your Elasticsearch mapping, and your query requirements; using many facets in your queries is going to slow down your response time significantly. There is no straightforward answer to this, and you will have to benchmark according to your needs.
Although Jonathan's answer was correct at the time, the world has moved on and it now seems that the people behind ElasticSearch have a long term plan to drop support for multiple types:
Where we want to get to: We want to remove the concept of types from Elasticsearch, while still supporting parent/child.
So for new projects, using only a single type per index will make the eventual upgrade to ElasticSearch 6.x be easier.
Jonathan's answer is great. I would just add a few other points to consider:
The number of shards can be customized for whichever solution you select. You may have one index with 15 primary shards, or split it into 3 indexes of 5 shards each; the performance won't change (assuming data are distributed equally).
Think about data usage. E.g., if you use Kibana to visualize, it's easier to include/exclude particular index(es), but types have to be filtered in the dashboard.
Data retention: for application log/metric data, use different indexes if you require different retention periods.
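The shard arithmetic behind the first point can be sketched as the settings bodies you would PUT when creating the indices. The index names are made up for the example:

```python
# Sketch: one index with 15 primary shards vs three indices with 5
# each hold the same total number of shards, so raw capacity is
# comparable either way.

def index_settings(shards, replicas=1):
    """Create-index settings body with explicit shard counts."""
    return {"settings": {"number_of_shards": shards,
                         "number_of_replicas": replicas}}

single = {"all_models": index_settings(15)}
split = {name: index_settings(5) for name in ("users", "posts", "logs")}

total_single = sum(v["settings"]["number_of_shards"] for v in single.values())
total_split = sum(v["settings"]["number_of_shards"] for v in split.values())
```

What differs between the layouts is therefore not shard count but operational concerns: retention, visualization filtering, and per-model mappings.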
Both the above answers are great!
I am adding an example of several types in an index.
Suppose you are developing an app to search for books in a library.
There are a few questions to ask the library owner:
Questions:
How many books are you planning to store?
What kind of books are you going to store in the library?
How are you going to search for books?
Answers:
I am planning to store 50k to 70k books (approximately).
I will have 15k-20k technology-related books (computer science, mechanical engineering, chemical engineering, and so on), 15k historical books, 10k medical science books, and 10k language-related books (English, Spanish, and so on).
Search by author's first name, author's last name, year of publication, and name of the publisher. (This gives you an idea of what information you should store in the index.)
From the above answers we can say the schema in our index should look somewhat like this.
// This is not the exact mapping, just for the example
{
  "yearOfPublish": {
    "type": "integer"
  },
  "author": {
    "type": "object",
    "properties": {
      "firstName": {
        "type": "string"
      },
      "lastName": {
        "type": "string"
      }
    }
  },
  "publisherName": {
    "type": "string"
  }
}
In order to achieve the above, we can create one index called Books and have various types within it.
Index: Books
Types: Science, Arts
(Or you can create more types, such as Technology, Medical Science, History, and Language, if you have many more books.)
The important thing to note here is that the schema is similar but the data is not identical. The other important thing is the total amount of data you are storing.
I hope the above helps you decide when to go for different types in an index; if you have different schemas, you should consider different indexes. Small index for less data, big index for big data :-)
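The layout above can be sketched as a small routing helper that decides which type a book document lands in. The category-to-type mapping and the "books" index name are assumptions for illustration:

```python
# Sketch: same schema for every book, data partitioned into types by
# category, all within one "books" index.

TYPE_BY_CATEGORY = {
    "computer science": "science",
    "mechanical engineering": "science",
    "history": "arts",
    "language": "arts",
}

def index_request(book):
    """Build the index/type target for a book document."""
    doc_type = TYPE_BY_CATEGORY.get(book["category"], "misc")
    return {"_index": "books", "_type": doc_type, "_source": book}

req = index_request({"category": "history",
                     "author": {"firstName": "Edward", "lastName": "Gibbon"},
                     "yearOfPublish": 1776,
                     "publisherName": "Strahan"})
```

A search for an author then runs against one index, optionally narrowed to a type, because every type shares the same field names.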

How to improve the performance for indexing data in cassandra

Cassandra's CQL doesn't have anything like the LIKE clause in MySQL for narrowing a search in the database.
I have looked through some material and came up with some ideas:
1. Using Hadoop
2. Using a MySQL server as my other database server
But are there any ways I can improve my Cassandra DB performance more easily?
Improving your Cassandra DB performance can be done in many ways, but I feel like you need to query the data efficiently which has nothing to do with performance tweaks on the db itself.
As you know, Cassandra is a nosql database, which means when dealing with it, you are sacrificing flexibility of queries for fast read/writes and scalability and fault tolerance. That means querying the data is slightly harder. There are many patterns which can help you query the data:
Know what you need in advance. Since querying with CQL is less flexible than what you would find in an RDBMS engine, you can take advantage of the fast reads/writes and save the data you want to query in the proper format by duplicating it. Too complex?
Imagine you have a user entity that looks like this:
{
  "pk": "someTimeUUID",
  "name": "someName",
  "address": "address",
  "birthDate": "someBirthDate"
}
If you persist the user like that, you will get a sorted list of users in the order they joined your db (the order you persisted them). Let's assume you want the same list of users, but only those named "John". That is possible with CQL but slightly inefficient. What you could do to solve this problem is denormalize your data by duplicating it to fit the query you are going to execute over it. You can read more about this here:
http://arin.me/blog/wtf-is-a-supercolumn-cassandra-data-model
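The denormalization pattern can be sketched with in-memory dicts standing in for tables; in CQL these would be two tables (say, users_by_id and users_by_name, both names are assumptions) written together on every insert:

```python
# Sketch: duplicate each user into a second "table" keyed by name, so
# the "users named John" query becomes a direct lookup instead of a scan.

users_by_id = {}
users_by_name = {}

def save_user(user):
    """Write the user to both query-specific tables."""
    users_by_id[user["pk"]] = user
    users_by_name.setdefault(user["name"], []).append(user)

save_user({"pk": "uuid-1", "name": "John", "address": "a"})
save_user({"pk": "uuid-2", "name": "Mary", "address": "b"})
save_user({"pk": "uuid-3", "name": "John", "address": "c"})

johns = users_by_name["John"]  # direct lookup, no scan
```

The cost is extra writes and storage; the gain is that every query you planned for in advance is a single partition read.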
However, this approach works for simple queries; for complex queries it is somewhat hard to achieve, and if you are unsure what you are going to query in advance, there is no way to store the data in the proper manner beforehand.
Hadoop comes to the rescue. As you know, you can use Hadoop's MapReduce to solve tasks involving a large amount of data, and Cassandra data, in my experience, can become very, very large. With Hadoop, to solve the above example, you would iterate over the data as-is, checking in each map method whether the user is named John and, if so, writing it to the context.
Here is how the pseudocode would look:
map<data> {
    if ("John".equals(data.getColumn("name"))) {
        context.write(data);
    }
}
At the end of the map phase, you would end up with a list of all users named John. You could put a time range (range slice) on the data you feed to Hadoop, which would give you all the users who joined your database over a certain period and are named John. As you can see, you are left with a lot more flexibility here and can do virtually anything. If the resulting data were small enough, you could put it in an RDBMS as summary data or cache it somewhere so further queries for the same data can retrieve it easily. You can read more about Hadoop here:
http://hadoop.apache.org/
