Can dapper take a scalar (and so anonymous) parameter? - dapper

Can I do something like
db.Query<Thing>("Select * from Things where id=?", 1)
with Dapper, or is a parameter object with named fields —
db.Query<Thing>("Select * from Things where id = @id", new { id = 1 });
the only valid syntax?

Well, it seems the short answer is no.
You must use new { id = 1 } or some such.

Related

How to search in array with LIKE operator

 id | name     | ipAddress
----+----------+-----------------------------
  1 | testname | {192.168.1.60,192.168.1.65}
I want to search ipAddress with LIKE. I tried:
{'$mac_ip_addresses.ip_address$': { [Op.contains]: [searchItem] }},
This one also:
{'$mac_ip_addresses.ip_address$': { [Op.like]: { [Op.any]: [searchItem] } }},
The data type of ipAddress is text[]. searchItem contains the IP that needs to be found in the ipAddress field, so I want to search the array with LIKE.
I don't know Sequelize but I can answer from postgres side.
There is no short syntax to search for a pattern inside an array in PostgreSQL.
If you want to check pattern for each array element individually, then you need to unfold the array using unnest:
SELECT id, name, ipaddress
FROM testing
WHERE EXISTS (
SELECT 1 FROM unnest(ipaddress) AS ip
WHERE ip LIKE '8.8.8.%'
);
If the array is frequently searched this way, it's better to store the data in normalized form.
However, there is a short syntax (plus GIN index support) for equality-based search (see @> and other operators here).
SELECT id, name, ipaddress
FROM testing
WHERE ipaddress @> ARRAY['8.8.8.8'];
What you asked
~~ is the operator used internally to implement SQL LIKE. There is no commutator for it - no operator that works with the left and right operands switched.
That's the one you'd need for your attempt to use the ANY construct with the pattern to the left.
You can create the operator, though, and it's pretty simple:
CREATE OR REPLACE FUNCTION reverse_like (text, text)
RETURNS boolean LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
'SELECT $2 LIKE $1';
CREATE OPERATOR <~~ (function = reverse_like, leftarg = text, rightarg = text);
Inspired by Jeff Janes' idea here:
Match string pattern to any array element
Then your query can have the pattern to the left of the operator:
SELECT *
FROM mac_ip_addresses
WHERE '192.168.2%.255' <~~ ANY (ipaddress);
Simple, but considerably slower than the EXISTS expression demonstrated by filiprem.
Then again, either query is excruciatingly slow for big tables, since neither can use an index. A normalized DB design with an n:1 table holding one IP each would allow that. It would also occupy several times the space on disk. Still, the much cleaner implementation ...
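For comparison, here is roughly what that normalized layout looks like, sketched in Python with SQLite purely to keep the example self-contained (table and column names are invented; the index-friendliness claim applies to Postgres with a text_pattern_ops or trigram index, not to this toy):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE host (id INTEGER PRIMARY KEY, name TEXT);
    -- one row per IP instead of an array column
    CREATE TABLE host_ip (host_id INTEGER REFERENCES host(id), ip TEXT);
    CREATE INDEX host_ip_ip_idx ON host_ip (ip);

    INSERT INTO host VALUES (1, 'testname');
    INSERT INTO host_ip VALUES (1, '192.168.1.60'), (1, '192.168.1.65');
""")

# The pattern search now hits a plain text column directly.
rows = con.execute("""
    SELECT DISTINCT h.id, h.name
    FROM host h
    JOIN host_ip hi ON hi.host_id = h.id
    WHERE hi.ip LIKE '192.168.1.6%'
""").fetchall()
print(rows)  # -> [(1, 'testname')]
```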
While stuck with your current design, there is still a way: create a trigram GIN index on a text representation of the array and add a redundant, "sargable" predicate to the query. Confused? Here's the recipe:
First, trigram indexes? Read this if you are not familiar:
PostgreSQL LIKE query performance variations
Neither the cast from text[] to text nor array_to_string() is immutable. But we need that for an expression index. Long story short, fake it with an immutable wrapper function:
CREATE OR REPLACE FUNCTION f_textarr2text(text[])
RETURNS text LANGUAGE sql IMMUTABLE AS $$SELECT array_to_string($1, ',')$$;
CREATE INDEX iparr_trigram_idx ON iparr
USING gin (f_textarr2text(iparr) gin_trgm_ops);
Related answer with the long story (and why it's safe):
Indexing an array for full text search
Then your query can be:
SELECT *
FROM mac_ip_addresses
WHERE NOT ('192.168.9%.255' <~~ ANY (ipaddress))
AND f_textarr2text(ipaddress) LIKE '192.168.9%.255'; -- logically redundant
The added predicate is logically redundant, but can tap into the power of the trigram index.
Much faster for big tables. Still a bit faster, yet:
SELECT *
FROM mac_ip_addresses
WHERE EXISTS (SELECT FROM unnest(ipaddress) ip WHERE ip LIKE '192.168.9%.255')
AND f_textarr2text(ipaddress) LIKE '192.168.9%.255';
But that's minor now.
db<>fiddle here
I addressed the question asked, as I took an interest. Might be of interest to the general public. Most probably not what you need, though.
What you need
I want to search in ipAddress with LIKE. searchItem contains the IP that need to be searched in the ipAddress field so I want to search in array with LIKE.
That should probably read:
"I want to search a given IP address (searchItem) in the array ipAddress. My first idea is to use LIKE ..."
Well, LIKE is for pattern matching. To find complete IP addresses in an array, it's the wrong tool. filiprem's second query with array operators is the way to go. Probably good enough.
Using the built-in data type cidr instead of text would be better, and the ip4 data type of the additional ip4r module better yet. All in combination with standard array operators as demonstrated.
Finally, converting IPv4 addresses to integer and using that with the additional intarray module should be stellar - as far as performance is concerned.

Django query filter using large array of ids in Postgres DB

I want to pass a query in Django to my PostgreSQL database. When I filter my query using a large array of ids, the query is very slow and goes up to 70s.
After looking for an answer I saw this post, which gives a solution to my problem: simply replace the ARRAY[ids] in the IN statement with VALUES (id1), (id2), ....
I tested the solution with a raw query in pgadmin, the query goes from 70s to 300ms...
How can I do the same command (i.e. not using an array of ids but a query with VALUES) in Django?
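For the record, the VALUES form the linked post suggests looks like the sketch below. It uses Python with SQLite only to show the SQL shape with safe placeholders (table and column names are invented); the 70s-to-300ms effect is a property of the Postgres planner, not something SQLite reproduces:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (tbl_id INTEGER PRIMARY KEY, data TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1, 6)])

ids = [2, 4]
# Build a VALUES list with one bound placeholder per id:
#   ... WHERE tbl_id IN (VALUES (?), (?))
values = ", ".join("(?)" for _ in ids)
sql = f"SELECT tbl_id FROM tbl WHERE tbl_id IN (VALUES {values})"
rows = con.execute(sql, ids).fetchall()
print(rows)  # -> [(2,), (4,)]
```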
I found a solution building on @erwin-brandstetter's answer, using a custom lookup:
from django.db.models import Lookup
from django.db.models.fields import Field

@Field.register_lookup
class EfficientInLookup(Lookup):
    lookup_name = "ineff"

    def as_sql(self, compiler, connection):
        lhs, lhs_params = self.process_lhs(compiler, connection)
        rhs, rhs_params = self.process_rhs(compiler, connection)
        params = lhs_params + rhs_params
        return "%s IN (SELECT unnest(%s))" % (lhs, rhs), params
This allows filtering like this:
MyModel.objects.filter(id__ineff=<list-of-values>)
The trick is to transform the array to a set somehow.
Instead of (this form is only good for a short array):
SELECT *
FROM tbl t
WHERE t.tbl_id = ANY($1);
-- WHERE t.tbl_id IN($1); -- equivalent
$1 being the array parameter.
You can still pass an array like you had it, but unnest and join. Like:
SELECT *
FROM tbl t
JOIN unnest($1) arr(id) ON arr.id = t.tbl_id;
Or you can keep your query, too, but replace the array with a subquery unnesting it:
SELECT * FROM tbl t
WHERE t.tbl_id = ANY (SELECT unnest($1));
Or:
SELECT * FROM tbl t
WHERE t.tbl_id IN (SELECT unnest($1));
Same effect for performance as passing a set with a VALUES expression. But passing the array is typically much simpler.
Detailed explanation:
IN vs ANY operator in PostgreSQL
How to use ANY instead of IN in a WHERE clause with Rails?
Optimizing a Postgres query with a large IN
Is this an example of the first thing you're asking?
relation_list = list(ModelA.objects.filter(id__gt=100))
obj_query = ModelB.objects.filter(a_relation__in=relation_list)
That would be an "IN" command because you're first evaluating relation_list by casting it to a list, and then using it in your second query.
If you instead skip the list() call and pass the queryset directly, Django will only make one query, and do the SQL optimization for you. So it should be more efficient that way.
You can always see the SQL command you'll be executing with obj_query.query if you're curious what's happening under the hood.
Hope that answers the question, sorry if it doesn't.
I had lots of trouble making the custom lookup 'ineff' work.
I may have solved it, but would love some validation from Django and Postgres experts.
1) Using it 'directly' on a ForeignKey field (ModelB)
ModelA.objects.filter(ModelB__ineff=queryset_ModelB)
Throws the following exception:
"Related Field got invalid lookup: ineff"
ForeignKey fields cannot be used with custom lookups.
A similar issue is reported here:
Custom lookup is not being registered in Django
2) Using it 'indirectly' on the pk field of related model (ModelB.id)
ModelA.objects.filter(ModelB__id__ineff=queryset_ModelB.values_list('id', flat=True))
Throws the following exception:
"can only concatenate list (not "tuple") to list"
Looking at Django Traceback, I noticed that rhs_params is a tuple.
Yet we try to add it to lhs_params (a list) in our custom lookup.
Hence I changed:
params = lhs_params + rhs_params
into:
params = lhs_params + list(rhs_params)
3) I then got a Postgres error (at least I had gotten past the Django ORM):
"function unnest(uuid) does not exist"
"HINT: No function matches the given name and argument types. You might need to add explicit type casts."
I apparently solved it by changing the sql:
from:
return "%s IN (SELECT unnest(%s))" % (lhs, rhs), params
to:
return "%s IN (SELECT unnest(ARRAY(%s)))" % (lhs, rhs), params
Hence my final as_sql method looks like this:
def as_sql(self, compiler, connection):
    lhs, lhs_params = self.process_lhs(compiler, connection)
    rhs, rhs_params = self.process_rhs(compiler, connection)
    params = lhs_params + list(rhs_params)
    return "%s IN (SELECT unnest(ARRAY(%s)))" % (lhs, rhs), params
It seems to work, and is indeed faster than __in (tested with EXPLAIN ANALYZE in Postgres).
But I would love to have some validation from experts, perhaps Erwin Brandstetter?
Thanks for your input.

query with like clause in scalikejdbc

Can anyone please give me an example of how to use a like clause in scalikejdbc with a dynamic value? I used the following query but it did not work:
sql"select * from tables_list where lower(TABLE_NAME) like '%$tableName.toLowerCase%'"
scalikejdbc has built-in SQL injection prevention, so when you write like '%$tableName.toLowerCase%' it is bound as like '%'yourValue'%', hence the error.
I found a way to work around it:
def search(name: String) = {
  val searchName = s"%$name%"
  DB readOnly { implicit session =>
    sql"select * from db where name like $searchName".map
    ...
    ...
  }
}
I hope this can help you.

mybatis- 3.1.1. how to override the resultmap returned from mybatis

We are using mybatis 3.1.1.
We found that for Oracle the result map returned contains column names in capital letters, while for MySQL it contains column names in small letters.
My question is: is there any way I can write some sort of interceptor so that I can modify the result returned by the result map?
Thanks.
I'm afraid the answer is that MyBatis doesn't provide any direct way to control the case of the keys in a result map. I asked this question recently on the MyBatis Google Group: https://groups.google.com/forum/?fromgroups#!topic/mybatis-user/tETs_JiugNE
The outcome is dependent on the behavior of the JDBC driver.
It also turns out that doing column aliasing as suggested by @jddsantaella doesn't work in all cases. I've tested MyBatis-3.1.1 with three databases in the past: MySQL, PostgreSQL and H2, and got different answers. With MySQL, the case of the column alias does dictate the case of the key in the hashmap. But with PostgreSQL it is always lowercase, and with H2 it is always uppercase. I didn't test whether column aliases will work with Oracle, but by default it appears to return capital letters.
I see two options:
Option 1: Create some helper method that your code will always use to pull the data out of the returned map. For example:
private Object getFromMap(Map<String, Object> map, String key) {
    if (map.containsKey(key.toLowerCase())) {
        return map.get(key.toLowerCase());
    } else {
        return map.get(key.toUpperCase());
    }
}
Option 2: Write a LowerCaseMap class that extends java.util.AbstractMap or java.util.HashMap and wraps all calls to put, putAll and/or get to always use lower case. Then specify that MyBatis should use your specific LowerCaseMap rather than a standard HashMap when populating the data from the query.
If you like this idea and want help on how to tell MyBatis how to use a different concrete collection class, see my answer to this StackOverflow question: https://stackoverflow.com/a/11596014/871012
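For illustration only, here is that idea sketched in Python (a real MyBatis solution would be the equivalent Java class extending java.util.HashMap): every key is folded to lower case on write and on lookup, so the driver's casing stops mattering:

```python
# Toy sketch of a case-folding map, NOT the Java class MyBatis would use.
class LowerCaseMap(dict):
    def __setitem__(self, key, value):
        # fold keys to lower case on the way in
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        # and on the way out
        return super().__getitem__(key.lower())

    def update(self, other):
        for k, v in other.items():
            self[k] = v

row = LowerCaseMap()
row.update({"MY_COLUMN": 42})              # Oracle-style upper-case key
print(row["my_column"], row["My_Column"])  # both resolve -> 42 42
```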
What if you modify the query so you get exactly the column name you need? For example:
select my_column as MY_COLUMN from ...
The idea of a LowerCaseMap as the ResultType is sound, but you can probably avoid writing your own. In my case I'm using org.apache.commons.collections4.map.CaseInsensitiveMap:
<select id="getTableValues"
resultType="org.apache.commons.collections.map.CaseInsensitiveMap">
SELECT *
FROM my_table
WHERE seq_val=#{seq_val}
</select>

Mongoid Syntax Questions

1) Finding by instance object
Assuming I have an instance object called @topic, I want to retrieve the answers for this given topic. I was thinking I should be able to pass in :topics => @topic, but I had to do the very ugly query below.
@answers = Answers.where(:topic_ids => {"$in" => [@topic.id]})
2) Getting the string representation of the id. I have a custom function (shown below). But shouldn't this be a very common requirement?
def sid
  return id.to_s
end
If your associations are set up correctly, you should be able to do:
@topic.answers
It sounds like the above is what you are looking for. Make sure you have set up your associations correctly. Mongoid is very forgiving when defining associations, so it can seem that they are set up right when there is in fact a problem like mismatched names in references_many and referenced_in.
If there's a good reason why the above doesn't work and you have to use a query, you can use this simple query:
@answers = Answer.where(:topic_ids => @topic.id)
This will match any Answer record whose topic_ids include the supplied ID. The syntax is the same for array fields as for single-value fields like Answer.where(:title => 'Foo'). MongoDB will interpret the query differently depending on whether the field is an array (check if supplied value is in the array) or a single value (check if the supplied value is a match).
Here's a little more info on how MongoDB handles array queries:
http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-ValueinanArray
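As a toy illustration of those semantics (plain Python, not MongoDB): an equality query value matches a scalar field by direct comparison and an array field by membership:

```python
# Toy model of MongoDB's equality matching for a single field.
def field_matches(stored, query_value):
    if isinstance(stored, list):       # array field: any element equal?
        return query_value in stored
    return stored == query_value       # scalar field: plain equality

answer = {"title": "Foo", "topic_ids": ["t1", "t2"]}
print(field_matches(answer["topic_ids"], "t1"))  # -> True
print(field_matches(answer["title"], "Foo"))     # -> True
print(field_matches(answer["topic_ids"], "t9"))  # -> False
```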
