Dapper.NET: should variable names be the same as parameter names? - dapper

I use Dapper.NET. Do the variable names have to be the same as the parameter names in the SQL?
For example:
int id = 123;
string name = "abc";
connection.Execute("insert [KeyLookup](Id, Name) values(#id, #name)",
new { id, name });
This works fine.
But what should I do if my variables have different names:
int user_id = 123;
string user_name = "abc";
connection.Execute("insert [KeyLookup](Id, Name) values(#id, #name)",
new { user_id, user_name }); // -?

You just map the names in the anonymous parameter object:
int user_id = 123;
string user_name = "abc";
connection.Execute("insert [KeyLookup](Id, Name) values(@id, @name)",
new { id = user_id, name = user_name });

Related

What are the options available to get Primary Key Column Names in Snowflake?

I need to fetch all primary keys, together with their parent table name, column name, and schema name.
I am using INFORMATION_SCHEMA for all metadata fetching. SHOW PRIMARY KEYS/DESCRIBE TABLE does the job, but it's not an option here.
I need something similar to SELECT * FROM DB.INFORMATION_SCHEMA.XXX.
What are the options we have here?
I am using JDBC.
You may consider using: getPrimaryKeys(String, String, String)
Details: https://docs.snowflake.com/en/user-guide/jdbc-api.html#object-databasemetadata
A while back, I wrote a user defined table function (UDTF) to get the PK column(s) for a single table, each column in the PK as a single row in the return. I extended it to return a table with all the columns in PKs in an entire database.
Once you create the UDTF, you can get all the PKs for a database like this:
select * from table(get_pk_columns(get_ddl('database', 'MY_DB_NAME')));
It will return a table with columns for the schema name, table name, and column name(s). Note that if there's a composite PK, it shows in the table as one row per column. You can of course use an aggregate function such as listagg() to change that into a single row with the columns names of the composite PK separated by commas.
It's possible that if you have a very large number of tables/columns in your database, the return of the GET_DDL() function will be too large to fit into the 16MB limit. If it does fit, this should return the results quickly.
/********************************************************************************************************
* *
* User defined table function (UDTF) to get all primary keys for a database. *
* *
* @param {string}: DATABASE_DDL The DDL for the database to get the PKs. Usually use GET_DDL() *
* @return {table}: A table with the columns comprising the table's primary key *
* *
********************************************************************************************************/
create or replace function GET_PK_COLUMNS(DATABASE_DDL string)
returns table ("SCHEMA_NAME" string, "TABLE_NAME" string, PK_COLUMN string)
language javascript
as
$$
{
processRow: function get_params(row, rowWriter, context){
var startTableLine = -1;
var endTableLine = -1;
var dbDDL = row.DATABASE_DDL.replace(/'[\s\S]*'/gm, '')
var lines = dbDDL.split("\n");
var currentSchema = "";
var currentTable = "";
var ln = 0;
var tableDDL = "";
var pkCols = null;
var c = 0;
for (var i=0; i < lines.length; i++) {
if (lines[i].match(/^create .* schema /)) {
currentSchema = lines[i].split("schema")[1].replace(/;/, '');
//rowWriter.writeRow({PK_COLUMN: "currentSchema = " + currentSchema});
}
if (lines[i].match(/^create or replace TABLE /)) {
startTableLine = i;
}
if (startTableLine != -1 && lines[i] == ");") {
endTableLine = i;
}
if (startTableLine != -1 && endTableLine != -1) {
// We found a table. Now, join it and send it for parsing
tableDDL = "";
for (ln = startTableLine; ln <= endTableLine; ln++) {
if (ln > 0) tableDDL += "\n";
tableDDL += lines[ln];
}
startTableLine = -1;
endTableLine = -1;
currentTable = getTableName(tableDDL);
pkCols = getPKs(tableDDL);
for (c = 0; c < pkCols.length; c++) {
rowWriter.writeRow({PK_COLUMN: pkCols[c], SCHEMA_NAME: currentSchema, TABLE_NAME: currentTable});
}
}
}
function getTableName(tableDDL) {
var lines = tableDDL.split("\n");
var s = lines[1];
s = s.substring(s.indexOf(" TABLE ") + " TABLE ".length);
s = s.split(" (")[0];
return s;
}
function getPKs(tableDDL) {
var c;
var keyword = "primary key";
var ins = -1;
var s = tableDDL.split("\n");
for (var i = 0; i < s.length; i++) {
ins = s[i].indexOf(keyword);
if (ins != -1) {
var colList = s[i].substring(ins + keyword.length);
colList = colList.replace("(", "");
colList = colList.replace(")", "");
var colArray = colList.split(",");
for (var pkc = 0; pkc < colArray.length; pkc++) {
colArray[pkc] = colArray[pkc].trim();
}
return colArray;
}
}
return []; // No PK
}
}
}
$$;
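For a quick sanity check outside Snowflake, the core parsing idea in getPKs() can be exercised as plain JavaScript in Node. This is a sketch only; the sample DDL below is illustrative, not real output of GET_DDL():

```javascript
// Extract the primary key column names from a single CREATE TABLE statement,
// mirroring the getPKs() helper in the UDTF above.
function getPKs(tableDDL) {
    const keyword = "primary key";
    for (const line of tableDDL.split("\n")) {
        const ins = line.indexOf(keyword);
        if (ins !== -1) {
            const colList = line
                .substring(ins + keyword.length)
                .replace("(", "")
                .replace(")", "");
            return colList.split(",").map(c => c.trim());
        }
    }
    return []; // no PK clause found
}

// Illustrative table DDL with a composite primary key
const ddl = [
    "create or replace TABLE MY_TABLE (",
    "    ID number,",
    "    NAME varchar,",
    "    primary key (ID, NAME)",
    ");"
].join("\n");

console.log(getPKs(ddl)); // [ 'ID', 'NAME' ]
```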
I did this using a very simple SQL based UDTF:
CREATE OR REPLACE FUNCTION admin.get_primary_key(p_table_nm VARCHAR)
RETURNS TABLE(column_name VARCHAR, ordinal_position int)
AS
WITH t AS (select get_ddl('TABLE', p_table_nm) tbl_ddl)
, t1 AS (
SELECT POSITION('primary key (', tbl_ddl) + 13 pos
, SUBSTR(tbl_ddl, pos, POSITION(')', tbl_ddl, pos) - pos ) str
FROM t
)
SELECT x.value column_name
, x.index ordinal_position
FROM t1
, LATERAL SPLIT_TO_TABLE(t1.str, ',') x
;
You can then query this in a SQL statement:
select *
FROM TABLE(admin.get_primary_key('<your table name>'));
Unfortunately, due to the odd implementation of GET_DDL(), it only accepts a string literal, so you can't use this function with a lateral join to information_schema.tables. You get the following error:
SQL compilation error: Invalid value [CORRELATION(T.TABLE_SCHEMA) ||
'.' || CORRELATION(T.TABLE_NAME)] for function '2', parameter
EXPORT_DDL: constant arguments expected

How to read an array of dates Date[] from a ResultSet for use with java.time classes?

Screenshot from Eclipse.
I'm trying to read from my ResultSet an attribute which is an array of dates (Date[]), but I can't do it with any of the available methods. Could anyone kindly help me? Thank you.
There is no ResultSet method that reads an array of dates, so since I had a daterange[] in PostgreSQL, I tried replacing it with LocalDate[]. I found a file on GitHub that implements the range type, but the import doesn't work for me.
public static ArrayList<Parlamentare> elenco_Parlamentari() throws NullPointerException {
    String url = "jdbc:postgresql://localhost/Parlamento"; // change to your db
    String user = "postgres";
    String password = "";
    ArrayList<Parlamentare> elenco = new ArrayList<Parlamentare>();
    Statement st;
    ResultSet rs;
    String sql;
    try (Connection cn = DriverManager.getConnection(url, user, password)) {
        if (cn != null) {
            System.out.println("Connected to PostgreSQL server successfully!");
        } else {
            System.out.println("Failed to connect to PostgreSQL server");
        }
        sql = "SELECT nome,partito,circoscrizione,data_nascita,luogo,titolo_studi,"
                + "mandati,commissioni,periodo_carica FROM parlamentari;";
        st = cn.createStatement(); // always create a statement on the connection
        String nome = "";
        String partito = "";
        String circoscrizione = "";
        Date data_nascita = null;
        String luogo = null;
        String titolo_studi = "";
        String[] mandati = null;
        String[] commissioni = null;
        LocalDate[] periodo_carica = null;
        rs = st.executeQuery(sql); // run the query on the statement
        while (rs.next()) {
            try {
                nome = rs.getString("nome");
                partito = rs.getString("partito");
                circoscrizione = rs.getString("circoscrizione");
                data_nascita = rs.getDate("data_nascita");
                luogo = rs.getString("luogo");
                titolo_studi = rs.getString("titolo_studi");
                if (rs.getArray("mandati") == null)
                    mandati = null;
                else
                    mandati = rs.getArray("mandati").toString().split(",");
                if (rs.getArray("commissioni") == null)
                    commissioni = null;
                else
                    commissioni = rs.getArray("commissioni").toString().split(",");
                // Failed attempts to read the daterange[] column:
                // LocalDate localDate = rs.getObject(1, LocalDate.class);
                // case "daterange": return Range.localDateRange(value);
                // periodo_carica = rs.getArray("periodo_carica")... // no getter works
                if (rs.getArray("periodo_carica") == null)
                    periodo_carica = null;
                System.out.print("rs.getObject(\"periodo_carica\").getClass()="
                        + rs.getObject("periodo_carica").getClass());
                Parlamentare a = new Parlamentare(nome, partito, circoscrizione,
                        data_nascita, luogo, titolo_studi,
                        mandati, commissioni, periodo_carica);
                elenco.add(a);
            } catch (NullPointerException obj) {
                obj.printStackTrace();
            }
        }
    } catch (SQLException e) {
        System.out.println("error: " + e.getMessage());
        e.printStackTrace();
    }
    return elenco;
} // end elenco_Parlamentari()
in postgresql:
CREATE TABLE public.parlamentari (
nome character varying(100) COLLATE pg_catalog."default" NOT NULL,
partito character varying(100) COLLATE pg_catalog."default" NOT NULL,
circoscrizione character varying(100) COLLATE pg_catalog."default"
NOT NULL,
data_nascita date,
luogo character varying(100) COLLATE pg_catalog."default",
titolo_studi character varying(100) COLLATE pg_catalog."default",
mandati character varying(1000)[] COLLATE pg_catalog."default",
commissioni character varying(100)[] COLLATE pg_catalog."default",
periodo_carica daterange[],
CONSTRAINT parlamentari_pkey PRIMARY KEY (nome, partito,
circoscrizione),
CONSTRAINT parlamentarinomekey UNIQUE (nome),
CONSTRAINT parlamentaripartitonomekey UNIQUE (partito, nome)
)
TABLESPACE pg_default;
ALTER TABLE public.parlamentari
OWNER to postgres;
GRANT ALL ON TABLE public.parlamentari TO postgres;
GRANT ALL ON TABLE public.parlamentari TO PUBLIC;
Parlamentare.java
public Parlamentare() {
    // note: these must assign the fields; the original declared new local
    // variables here, which left the fields untouched
    nome = "";
    partito = "";
    circoscrizione = "";
    data_nascita = null;
    luogo = null;
    titolo_studi = "";
    mandati = null;
    commissioni = null;
    periodo_carica = null;
}
Sample Query Output
In the event you can't make your JDBC library work with the daterange type from PG, perhaps you can rewrite the query to convert the daterange objects into strings (an array of strings, or just one combined string; see the PostgreSQL array_to_string function), in which case you process the resulting string on the Java side.
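As a sketch of that suggestion: if the query returned periodo_carica as one delimited string (e.g. via array_to_string(periodo_carica, ';')), the client-side parsing could look like the following. This is plain JavaScript to show only the string handling; the delimiter and the "[start,end)" format are assumptions:

```javascript
// Parse a ';'-delimited list of PostgreSQL daterange literals such as
// "[2018-03-23,2023-03-22);[2013-03-15,2018-03-14)" into start/end pairs.
function parseDateRanges(s) {
    if (!s) return [];
    return s.split(";").map(r => {
        const [start, end] = r
            .replace(/^\[/, "")   // strip leading '[' (inclusive lower bound)
            .replace(/\)$/, "")   // strip trailing ')' (exclusive upper bound)
            .split(",");
        return { start, end };
    });
}

console.log(parseDateRanges("[2018-03-23,2023-03-22);[2013-03-15,2018-03-14)"));
```

Each start/end string can then be fed to LocalDate.parse on the Java side.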

ASP.NET Core MVC and Entity Framework output param not returning when calling stored procedure

I have a procedure which returns the identity of the record added. I am using Entity Framework to call the procedure and retrieve the value, but it is always 0.
This is the code - can you figure out why it is not returning the identity value?
C# Entity Framework domain code:
var cNumber = new SqlParameter("CNumber", acctSl.cNumber);
var fId = new SqlParameter("FId", acctSl.FId);
var splAmt = new SqlParameter("SplAmt", acctSl.SplAmt);
var frDt = new SqlParameter("FrDt", acctSl.FrDate);
var toDt = new SqlParameter("ToDt", acctSl.ToDate);
var user = new SqlParameter("User", acctSl.User);
var id = new SqlParameter("Id", "")
{
Direction = ParameterDirection.Output,
SqlDbType = SqlDbType.VarChar
};
var sql = "EXECUTE [dbo].[InsertAcctSpl] #CNumber, #FID, #SplAmt, #FrDt, #ToDt, #User, #Id OUTPUT";
var result = DbContext.Database.ExecuteSqlRaw(sql, cNumber, fId, splAmt, frDt, toDt, user, id);
int rowsAffected;
var yourOutput = Convert.ToInt32(id.Value);
if (rowsAffected > 0)
{
acctSl.AcctId = yourOutput;
}
else
{
acctSl.AcctId = 0;
}
SQL Server procedure:
ALTER PROCEDURE [dbo].[InsertAccountsSpend]
@CNumber varchar(15),
@FId bigint,
@SplAmt money,
@FrDt date,
@ToDt date,
@User bigint,
@Id bigint OUTPUT
AS
INSERT INTO AcctSpend (CNmbr, FID, SplAmt, FrDt, ToDt,
Cr8Dt, Cr8User_ID, UpdtDt, UpdtUser_ID)
VALUES (@CNumber, @FId, @SplAmt, @FrDt, @ToDt,
GETDATE(), @User, GETDATE(), @User)
SET @Id = SCOPE_IDENTITY()
RETURN @Id
The issue was with the data type; it needed to be BigInt:
var id = new SqlParameter("Id", "")
{
Direction = ParameterDirection.Output,
SqlDbType = SqlDbType.BigInt
};
Your problem seems to be with this section of your code:
int rowsAffected;
var yourOutput = Convert.ToInt32(id.Value);
if (rowsAffected > 0)
{
acctSl.AcctId = yourOutput;
}
else
{
acctSl.AcctId = 0;
}
You are basing your if-else logic on the rowsAffected variable, but that variable is never assigned the value from your stored procedure's output. Since rowsAffected is declared as an int, it cannot be null, so it is automatically set to 0.
To get the actual value for rowsAffected you will need to utilize the data returned in your result variable that you have declared here:
var result = DbContext.Database.ExecuteSqlRaw(sql, cNumber, fId, splAmt, frDt, toDt, user, id);
EDIT
It appears that the syntax around your id SQL parameter object is incorrect. Try creating this object in the following manner:
var id = new SqlParameter
{
ParameterName = "Id",
DbType = System.Data.DbType.String,
Direction = System.Data.ParameterDirection.Output
};

How to use IN Clause for list of strings or GUID's in Dapper

I am trying to write a Dapper query with an IN clause, but it's throwing a casting error: "Conversion failed when converting the nvarchar value 'A8B08B50-2930-42DC-9DAA-776AC7810A0A' to data type int." In the query below, fleetAsset is a Guid converted into a string.
public IQueryable<MarketTransaction> GetMarketTransactions(int fleetId, int userId, int rowCount)
{
//Original EF queries which I am trying to convert to Dapper
//var fleetAsset = (from logicalFleetNode in _context.LogicalFleetNodes
// where logicalFleetNode.LogicalFleetId == fleetId
// select logicalFleetNode.AssetID).ToList();
////This query fetches guid of assetprofiles for which user having permissions based on the assets user looking onto fleet
//var assetProfileIds = (from ap in _context.AssetProfileJoinAccounts
// where fleetAsset.Contains(ap.AssetProfile.AssetID) && ap.AccountId == userId
// select ap.AssetProfileId).ToList();
var fleetAsset = _context.Database.Connection.Query<string>("SELECT CONVERT(varchar(36),AssetID) from LogicalFleetNodes Where LogicalFleetId=@FleetId",
new { fleetId }).AsEnumerable();
//This query fetches the guids of asset profiles for which the user has permissions, based on the assets in the fleet
var sql = String.Format("SELECT TOP(@RowCount) AssetProfileId FROM [AssetProfileJoinAccounts] AS APJA WHERE ( EXISTS (SELECT " +
"1 AS [C1] FROM [dbo].[LogicalFleetNodes] AS LFN " +
"INNER JOIN [dbo].[AssetProfile] AS AP ON [LFN].[AssetID] = [AP].[AssetID]" +
" WHERE ([APJA].[AssetProfileId] = [AP].[ID]) " +
" AND ([APJA].[AccountId] = @AccountId AND LogicalFleetId IN @FleetId)))");
var assetProfileIds = _context.Database.Connection.Query<Guid>(sql, new { AccountId = userId, FleetId = fleetAsset, RowCount=rowCount });
Dapper performs expansion, so if the data types match, you should just need to do:
LogicalFleetId IN @FleetId
(note no parentheses)
Passing in a FleetId (typically via an anonymous type, as in the question) that is an array, a list, or similar.
If it isn't working when you remove the parentheses, then there are two questions to ask:
what is the column type of LogicalFleetId?
what is the declared type of the local variable fleetAsset (that you are passing in as FleetId)?
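To illustrate what "expansion" means here, this is a rough JavaScript sketch of the rewrite Dapper performs internally when a list-valued parameter appears after IN with no parentheses. It is not Dapper's actual code, and the exact generated parameter names are an assumption:

```javascript
// Sketch: rewrite "in @name" into "in (@name1, @name2, ...)", one
// parameter per element of the supplied list.
function expandInClause(sql, name, values) {
    const expanded = values.map((_, i) => `@${name}${i + 1}`).join(",");
    return sql.replace(new RegExp(`in @${name}\\b`, "i"), `in (${expanded})`);
}

const sql = "select g from foo where LogicalFleetId in @FleetId";
console.log(expandInClause(sql, "FleetId", [10, 20, 30]));
// select g from foo where LogicalFleetId in (@FleetId1,@FleetId2,@FleetId3)
```

The actual values are then bound to the generated parameters, which is why the element type of the list must match the column type.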
Update: test case showing it working fine:
public void GuidIn_SO_24177902()
{
// invent and populate
Guid a = Guid.NewGuid(), b = Guid.NewGuid(),
c = Guid.NewGuid(), d = Guid.NewGuid();
connection.Execute("create table #foo (i int, g uniqueidentifier)");
connection.Execute("insert #foo(i,g) values(#i,#g)",
new[] { new { i = 1, g = a }, new { i = 2, g = b },
new { i = 3, g = c },new { i = 4, g = d }});
// check that rows 2&3 yield guids b&c
var guids = connection.Query<Guid>("select g from #foo where i in (2,3)")
.ToArray();
guids.Length.Equals(2);
guids.Contains(a).Equals(false);
guids.Contains(b).Equals(true);
guids.Contains(c).Equals(true);
guids.Contains(d).Equals(false);
// in query on the guids
var rows = connection.Query(
"select * from #foo where g in #guids order by i", new { guids })
.Select(row => new { i = (int)row.i, g = (Guid)row.g }).ToArray();
rows.Length.Equals(2);
rows[0].i.Equals(2);
rows[0].g.Equals(b);
rows[1].i.Equals(3);
rows[1].g.Equals(c);
}

SQL Filter on multiple columns with possible 'all' value

Apologies if this question has been asked before. I have searched this website, but didn't find an answer.
I have a table with approx. 15 fields. Most of these are foreign keys to other tables (properties) like this:
Table MyObject
name nvarchar(500)
property_1_id int
property_2_id int
.....
property_15_id int
Table Property_1
id int
name nvarchar(50)
Table Property_2
id int
name nvarchar(50)
.. etc
Now I have to make an application that allows the user to filter on any of these properties, using a combination of dropdown lists. These lists contain the values of the other tables, and an extra value: 'All'.
How can I construct my query so it accepts these 15 fields as parameters, with the value either being a real value, or '-1' meaning 'all', and then filter the appropriate records?
Like this:
SELECT Property_1, Property_2 FROM MyObject
WHERE (Property_1 = @Property_1 OR @Property_1 = -1)
AND (Property_2 = @Property_2 OR @Property_2 = -1)
Add more AND clauses for each of your properties.
Whenever a parameter is set to -1, the corresponding AND clause always evaluates to true, thus matching any value for that property.
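The "-1 means all" logic is easy to sanity-check outside SQL; here is a minimal JavaScript sketch of the same predicate (row and filter names are illustrative):

```javascript
// A filter value of -1 makes its clause always true, so that column
// is not restricted -- the same trick as the OR @param = -1 clauses.
function matches(row, filters) {
    return Object.entries(filters).every(
        ([col, val]) => val === -1 || row[col] === val
    );
}

const rows = [
    { property_1: 1, property_2: 5 },
    { property_1: 2, property_2: 5 },
];
console.log(rows.filter(r => matches(r, { property_1: -1, property_2: 5 })).length); // 2
console.log(rows.filter(r => matches(r, { property_1: 1, property_2: 5 })).length);  // 1
```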
A sample T-SQL Script with 5 parameters shown below:
CREATE PROCEDURE [dbo].[EnterYourSprocNameHere]
@property_1 INT = -1
,@property_2 INT = -1
,@property_3 INT = -1
,@property_4 INT = -1
,@property_5 INT = -1
AS
BEGIN
SELECT property_1, property_2, property_3, property_4, property_5
FROM MyObject
WHERE (Property_1 = @property_1 OR @property_1 = -1)
AND (Property_2 = @property_2 OR @property_2 = -1)
AND (Property_3 = @property_3 OR @property_3 = -1)
AND (Property_4 = @property_4 OR @property_4 = -1)
AND (Property_5 = @property_5 OR @property_5 = -1)
END
If you want to pass NULL instead of -1, use this script:
CREATE PROCEDURE [dbo].[EnterYourSprocNameHere]
@property_1 INT = NULL
,@property_2 INT = NULL
,@property_3 INT = NULL
,@property_4 INT = NULL
,@property_5 INT = NULL
AS
BEGIN
SELECT property_1, property_2, property_3, property_4, property_5
FROM MyObject
WHERE (Property_1 = @property_1 OR @property_1 IS NULL)
AND (Property_2 = @property_2 OR @property_2 IS NULL)
AND (Property_3 = @property_3 OR @property_3 IS NULL)
AND (Property_4 = @property_4 OR @property_4 IS NULL)
AND (Property_5 = @property_5 OR @property_5 IS NULL)
END
If you can send a NULL value instead of -1, you can use the ISNULL function. (Note that this form also filters out rows where the column itself is NULL, since NULL = NULL does not evaluate to true.)
SELECT Property_1, Property_2, Property_3
FROM MyObject
WHERE Property_1 = ISNULL(@Property_1, Property_1)
AND Property_2 = ISNULL(@Property_2, Property_2)
AND Property_3 = ISNULL(@Property_3, Property_3)
...
