Catch errors with default for empty indexes - try-catch

I'm trying to run:
db.table(table)
.max({index: 'number'})('number')
.default(0)
and receive this error:
(node:45) UnhandledPromiseRejectionWarning: ReqlQueryLogicError: `max` found no entries in the specified index in:
r.db("db").table("table").max({"index": "number"})("number").default(0)
Is there a way to do it properly?

Your table is empty, or doesn't contain elements with the indexed field "number".
You can try the same query without using the "number" field as an index:
r.db("DB").table("table")
.max('number')('number')
.default(0)
Or fill your table with elements; in that case, using the field "number" as an index gives you a performance advantage.
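If you'd rather keep the index, another option (matching the try-catch idea in the title) is to handle the rejection on the driver side instead of in ReQL. A minimal sketch, assuming the official rethinkdb JavaScript driver and an already-open connection conn (both assumptions, not part of the original question):
r.db("db").table("table")
  .max({index: 'number'})('number')
  .run(conn)
  .then(function (value) {
    // normal case: the index has entries
    console.log(value);
  })
  .catch(function (err) {
    // `max` on an empty index rejects with ReqlQueryLogicError,
    // so fall back to the default value here
    console.log(0);
  });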

Related

NOT IN within a cypher query

I'm attempting to find all values that match any item within a list of values in Cypher, similar to a SQL query with IN and NOT IN. I also want to find, in a different query, all values that are not in the list. The idea is that I want to assign a binary property to each node indicating whether the node's name is within the predefined list.
I've tried the following code blocks:
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
WHERE NOT temp2.Name IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp
This block returns nothing, but should return a rather large amount of data.
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
WHERE temp2.Name NOT IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp
This code block returns an error about the position of the NOT. Does anyone know the correct syntax for this statement? I've looked around online and in the Neo4j documentation, but there are a lot of conflicting ideas across version changes. Thanks in advance!
Neo4j is case-sensitive, so you need to check the data to ensure that EMAIL_DOMAIN.Name is all upper case. If Name is mixed case, you can convert it using toUpper(); if Name is all lower case, you need to convert the values in your list instead.
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
WHERE NOT toUpper(temp2.Name) IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp
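Since the eventual goal is a binary property on each node, a possible follow-up is to assign the result of the IN expression directly. This is only a sketch; the property name isKnownDomain is made up here and not part of the original question:
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
// the flag is true when the domain is in the list, false otherwise
SET temp.isKnownDomain = toUpper(temp2.Name) IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp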

MongoDB does not allow duplicating arrays across documents

The problem is that when I have a document containing an empty array ([]) and then add another document whose array is also empty, I get this error message:
Failed to insert document.
Error:
Error when saving document: E11000 duplicate key error collection:
package.package index: collection_name dup key: { : undefined }
How do I allow duplicate values in arrays across different documents?
The problem was that the array had been given a unique index earlier, and that index stayed behind even though the array was recently removed. The simple command db.collection.getIndexes() displayed the index, and db.collection.dropIndex("idxName") removed it. The error no longer appears, and I can now add duplicate values across documents.
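For reference, that sequence looks roughly like this in the mongo shell (the collection and index names here are placeholders; substitute your own):
// list all indexes on the collection to spot the leftover unique index
db.collection.getIndexes()
// drop the stale index by its name
db.collection.dropIndex("idxName")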

(MDX) how to use current member name as another column?

Example MDX query from https://quartetfs.com/resource-center/mdx-query-basics:
SELECT
NON EMPTY {[ASIN].[ASIN].Members} ON ROWS,
NON EMPTY {[Category].[Category].[LCD]} ON COLUMNS
FROM [Amazon]
WHERE ( [Measures].[Gross.Profit],
[Time].[ALL].[AllMember].[2011].[5],
[Brand].[Brand].[LG])
How could one repeat the ASIN field (pink column) in another column?
I tried adding [ASIN].[ASIN] to the ON COLUMNS expression:
SELECT
NON EMPTY {[ASIN].[ASIN].Members} ON ROWS,
NON EMPTY {[Category].[Category].[LCD],[ASIN].[ASIN]} ON COLUMNS
FROM [Amazon]
WHERE ( [Measures].[Gross.Profit],
[Time].[ALL].[AllMember].[2011].[5],
[Brand].[Brand].[LG])
This resulted in the error "Two sets specified in the function have different dimensionality". Adding .CurrentMember resulted in the same error.
I also tried adding the ASIN property through a new measure:
WITH
MEMBER Measures.ASIN AS [ASIN].[ASIN].CurrentMember
SELECT
NON EMPTY {[ASIN].[ASIN].Members} ON ROWS,
NON EMPTY {[Category].[Category].[LCD],Measures.ASIN} ON COLUMNS
FROM [Amazon]
WHERE ( [Measures].[Gross.Profit],
[Time].[ALL].[AllMember].[2011].[5],
[Brand].[Brand].[LG])
This adds a new column, but with null values.
What I want to see is:
              LCD          ASIN
B003D4WAVW    124,420.16   B003D4WAVW
...
Is there a way to achieve this?
Try this one:
WITH
MEMBER Measures.ASIN AS [ASIN].[ASIN].CurrentMember.Member_Name
MEMBER Measures.LCD AS ([Category].[Category].[LCD],[Measures].[Gross.Profit])
SELECT
NON EMPTY {[ASIN].[ASIN].Members} ON ROWS,
NON EMPTY {[Measures].[LCD],[Measures].[ASIN]} ON COLUMNS
FROM [Amazon]
WHERE ( [Time].[ALL].[AllMember].[2011].[5],
[Brand].[Brand].[LG])
You tried to use a dimension member and a measure member on the same axis; I've transformed this into two measures instead.
Tested on my own data.

RethinkDB - Find documents with missing field

I'm trying to write the most efficient query to find all documents that do not have a specific field. Is there a better way to do this than the examples I have listed below?
// Get the ids of all documents missing "location"
r.db("mydb").table("mytable").filter({location: null},{default: true}).pluck("id")
// Get a count of all documents missing "location"
r.db("mydb").table("mytable").filter({location: null},{default: true}).count()
Right now, these queries take about 300-400 ms on a table with ~40k documents, which seems rather slow. Furthermore, in this specific case, the "location" attribute contains latitude/longitude and has a geospatial index.
Is there any way to speed this up? Thanks!
A naive suggestion
You could use the hasFields method along with the not method to filter out unwanted documents:
r.db("mydb").table("mytable")
.filter(function (row) {
return row.hasFields({ location: true }).not()
})
This might or might not be faster, but it's worth trying.
Using a secondary index
Ideally, you'd want to make location a secondary index and then use getAll or between, since queries that use indexes are much faster. A way to work around the missing field is to give every row in your table a false value for location if it doesn't have one. Then you would create a secondary index on location. Finally, you can query the table using getAll as much as you want!
Adding a location property to all documents without a location
For that, you'd need to first insert location: false into all rows without a location. You could do this as follows:
r.db("mydb").table("mytable")
.filter(function (row) {
return row.hasFields({ location: true }).not()
})
.update({
location: false
})
After this, you would need to find a way to insert location: false every time you add a document without a location.
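One way to do that (a sketch only; newDoc stands for whatever document you are about to insert) is to merge the incoming document over a { location: false } default, so the false survives only when the document carries no location of its own:
r.db("mydb").table("mytable")
  // the right-hand side of merge wins, so an existing location overrides the false default
  .insert(r.expr({ location: false }).merge(newDoc))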
Create secondary index for the table
Now that all documents have a location field, we can create a secondary index for location.
r.db("mydb").table("mytable")
.indexCreate('location')
Keep in mind that you only have to add the { location: false } values and create the index once.
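If you query right after creating the index, you may also want to wait for it to finish building first; a small sketch:
r.db("mydb").table("mytable")
  .indexWait('location')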
Use getAll
Now we can just use getAll to query documents using the location index.
r.db("mydb").table("mytable")
.getAll(false, { index: 'location' })
This will probably be faster than the query above.
Using a secondary index (function)
You can also create a secondary index from a function. Basically, you define the index with a function and then query its results using getAll. This is probably easier and more straightforward than what I proposed before.
Create the index
Here it is:
r.db("mydb").table("mytable")
.indexCreate('has_location',
function(x) { return x.hasFields('location');
})
Use getAll
Here it is:
r.db("mydb").table("mytable")
.getAll(false, { index: 'has_location' })
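And since the original question also asked for a count, the same index can back that as well, for example:
r.db("mydb").table("mytable")
  .getAll(false, { index: 'has_location' })
  .count()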

System.QueryException: Non-selective query against large object type (more than 100000 rows)

My class had been working for months, but now it gives me the following error:
"System.QueryException: Non-selective query against large object type (more than 100000 rows). Consider an indexed filter or contact salesforce.com about custom indexing. Even if a field is indexed a filter might still not be selective when: 1. The filter value includes null (for instance binding with a list that contains null) 2. Data skew exists whereby the number of matching rows is very large (for instance, filtering for a particular foreign key value that occurs many times)"
On this line:
usuarios=[select id,PersonEmail,marca__c,
Marcas_de_las_que_quiere_recibir_ofertas__c,dni__c,
Segmento__c,Datos_Preferencias__c,
Datos_Test_Compatibilidad__c,Datos_Test_Big_Five__c,
Datos_CV_Obligatorio__c, datos_contratacion__c,
Fecha_orientacion_a_marca__c,Resultado_orientacion_a_marca__c,
Fecha_formacion_online__c,Resultado_formacion_online__c
from Account
where
Segmento__c in :setsegmentos
and Marca_relacional__c in :listaMarcas
and Baja__c=false and Ya_estoy_trabajando__c=false
and Back_list__c=false
and Inactivo__c=false limit 3000];
My SOQL only takes 3000 records. Can anyone help me, please?
I made some changes following advice from chri (thank you!), but I still get the same error.
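For what it's worth, the error's own suggestion ("consider an indexed filter") usually means keeping only selective, indexable conditions in the WHERE clause and applying the boolean = false conditions in Apex instead. A rough sketch under that assumption (field list shortened, and note that the LIMIT now applies before the boolean filtering, so the results may differ):
// keep only the (hopefully selective) IN filters in SOQL
List<Account> candidatos = [SELECT Id, PersonEmail, marca__c, Segmento__c,
                                   Baja__c, Ya_estoy_trabajando__c, Back_list__c, Inactivo__c
                            FROM Account
                            WHERE Segmento__c IN :setsegmentos
                              AND Marca_relacional__c IN :listaMarcas
                            LIMIT 3000];
usuarios = new List<Account>();
for (Account a : candidatos) {
    // apply the non-selective boolean filters in Apex instead of SOQL
    if (!a.Baja__c && !a.Ya_estoy_trabajando__c && !a.Back_list__c && !a.Inactivo__c) {
        usuarios.add(a);
    }
}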
