Dapper - Access Columns by Index

I am trying to use Dapper.NET against a legacy database; some of the stored procedures return results with multiple columns of the same name but different data types.
The existing code simply ignores the name of the column and uses the column index on the result.
Is this possible using Dapper?
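Dapper's default mapping is by column name, so duplicate names are ambiguous; the usual way out is to drop down to the raw data reader (Dapper exposes an ExecuteReader extension for this) and read columns by ordinal. The positional-access idea can be sketched in Python's sqlite3, where rows come back as positional tuples (the table and column names here are invented for illustration):

```python
import sqlite3

# Hypothetical legacy-style result set: both output columns are named "Id",
# so name-based mapping is ambiguous, but positional access is not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (Id INTEGER, Ref TEXT)")
conn.execute("INSERT INTO orders VALUES (42, 'A-1')")

cur = conn.execute("SELECT Id AS Id, Ref AS Id FROM orders")  # duplicate names
row = cur.fetchone()

# Reading by index sidesteps the name collision entirely.
first, second = row[0], row[1]
print(first, second)  # 42 A-1
```

The same idea in Dapper terms: skip the automatic object mapping for these procedures and consume the reader ordinally, exactly as the legacy code did.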

Related

How can I get a specific column value from multiple databases in T-SQL

I have more than 100 databases that share a common table with a common column.
I need that column's value from all of the databases. Is there a way to get them all with a single query?
I'm using SSMS, and all the databases use a single user.
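In T-SQL this is typically done by building dynamic SQL over sys.databases (or with the undocumented sp_MSforeachdb procedure) and combining the per-database SELECTs with UNION ALL. The shape of the loop can be sketched in Python, with throwaway SQLite databases standing in for the 100 real ones (database names and values here are invented):

```python
import sqlite3

# Each "database" is a separate SQLite connection; in T-SQL you would
# instead iterate sys.databases and build one dynamic UNION ALL query.
databases = {}
for name, value in [("db1", 10), ("db2", 20), ("db3", 30)]:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE common (shared_col INTEGER)")  # the common table
    conn.execute("INSERT INTO common VALUES (?)", (value,))
    databases[name] = conn

# Collect the common column's value from every database in one pass.
results = {name: conn.execute("SELECT shared_col FROM common").fetchone()[0]
           for name, conn in databases.items()}
print(results)  # {'db1': 10, 'db2': 20, 'db3': 30}
```

In the real T-SQL version, each iteration contributes a `SELECT DB_NAME, shared_col FROM [dbname].dbo.common` fragment to the dynamic statement.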

Why can't columnar databases like Snowflake and Redshift change the column order?

I have been working with Redshift and am now testing Snowflake. Both are columnar databases. Everything I have read about this type of database says that they store information by column rather than by row, which helps with massive parallel processing (MPP).
But I have also seen that they are not able to change the order of a column or add a column in between existing columns (don't know about other columnar databases). The only way to add a new column is to append it at the end. If you want to change the order, you need to recreate the table with the new order, drop the old one, and change the name of the new one (this is called a deep copy). But this sometimes can't be possible because of dependencies or even memory utilization.
I'm more surprised by the fact that this can be done in row-oriented databases but not in columnar ones. Of course, there must be a reason why it's not a feature yet, but I clearly don't have enough information about it. I thought it was going to be just a matter of changing the ordinal of the columns in information_schema, but clearly it's not that simple.
Does anyone know the reason for this?
Generally, column ordering within the table is not considered to be a first-class attribute. Columns can be retrieved in whatever order you require by listing the names in that order.
Emphasis on column order within a table suggests frequent use of SELECT *. I'd strongly recommend not using SELECT * in columnar databases without an explicit LIMIT clause, to minimize the impact.
If column order must be changed, you do that in Redshift by creating a new empty version of the table with the columns in the desired order and then using ALTER TABLE APPEND to move the data into the new table very quickly.
https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE_APPEND.html
The order in which the columns are stored internally cannot be changed without dropping and recreating them.
Your SQL can retrieve the columns in any order you want.
The general requirement to have columns listed in a particular order is usually for viewing purposes.
You could define a view to be in the desired column order and use the view in the required operation.
CREATE OR REPLACE TABLE CO_TEST(B NUMBER,A NUMBER);
INSERT INTO CO_TEST VALUES (1,2),(3,4),(5,6);
SELECT * FROM CO_TEST;
SELECT A,B FROM CO_TEST;
CREATE OR REPLACE VIEW CO_VIEW AS SELECT A,B FROM CO_TEST;
SELECT * FROM CO_VIEW;
Creating a view that lists the columns in the required order will not disturb the actual table underneath the view, and the resources associated with recreating the table are not wasted.
In some databases (Oracle especially) the ordering of columns in a table can make a difference in performance, by storing NULLable columns at the end of the list. This has to do with how storage is utilized within the data block.
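The Snowflake example above is easy to verify on any engine that has views. Here is a runnable SQLite version showing that the view presents a different column order while the table's stored order is untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE co_test (b INTEGER, a INTEGER);      -- stored order: b, a
    INSERT INTO co_test VALUES (1, 2), (3, 4), (5, 6);
    CREATE VIEW co_view AS SELECT a, b FROM co_test;  -- presented order: a, b
""")

# cursor.description reports the column names in result order.
table_cols = [d[0] for d in conn.execute("SELECT * FROM co_test").description]
view_cols = [d[0] for d in conn.execute("SELECT * FROM co_view").description]
print(table_cols)  # ['b', 'a']
print(view_cols)   # ['a', 'b']
```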

SSIS Cannot map the lookup column. NVARCHAR(MAX) error

I am writing an ETL process. I have created a view in my source database; the view is a join of two tables. Now I need to fetch data from the view, but two of its columns have the nvarchar(max) data type.
When I perform a lookup operation in the DFT, I am facing this error:
Cannot map the lookup column, 'Description', because the column data type is a binary large object block (BLOB).
I have seen following links:
SSIS Lookup By NVARCHAR(MAX) Column
SSIS Lookup with Derived Columns
Note that the Description column may contain a large amount of text.
What you describe is a lookup, and the lookup transformation supports join columns of any data type except DT_R4, DT_R8, DT_TEXT, DT_NTEXT, or DT_IMAGE (i.e. BLOBs).
Personally, I try to avoid handling BLOBs in SSIS as much as possible. Convert the BLOB to an nvarchar with an explicit maximum length, and you should be fine.
You might also get this problem if the column you are comparing against in the lookup table has been assigned different constraints.
For example, if custid in the source table allows NULL but custid in the target table does not, you might get this error.
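A common workaround (as in the linked questions) is to expose a bounded cast of the column in the view, e.g. CAST(Description AS NVARCHAR(4000)), so the Lookup sees DT_WSTR instead of a BLOB type; note this silently truncates anything longer, which matters since Description can be large. The cast-then-join idea, sketched with SQLite (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_view (id INTEGER, description TEXT);  -- nvarchar(max) stand-in
    CREATE TABLE lookup_tbl (description TEXT, category TEXT);
    INSERT INTO source_view VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO lookup_tbl VALUES ('widget', 'hardware'), ('gadget', 'electronics');
""")

# Casting the unbounded column to a bounded string type before the join is
# the analogue of exposing CAST(Description AS NVARCHAR(4000)) in the view,
# so SSIS no longer treats the join column as a BLOB.
rows = conn.execute("""
    SELECT s.id, l.category
    FROM source_view s
    JOIN lookup_tbl l
      ON CAST(s.description AS VARCHAR(4000)) = l.description
    ORDER BY s.id
""").fetchall()
print(rows)  # [(1, 'hardware'), (2, 'electronics')]
```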

SQL Server - changing column from UniqueIdentifier to varchar(128) - backwards compatibility?

I'm making changes to a SQL Server (2008) database to change an existing column's type from a UUID to varchar(128). The column, we'll call it reference, was originally used to save the ID of a reference project that provided more information about the data in the row, but the business requirements have changed and we're now just going to allow free-form text instead. There were no foreign keys set up against the column, so there won't be any breaking relationships.
My biggest concern at the moment is one of backwards compatibility with existing stored procedures. Unfortunately, the database is very big and very old, and contains hundreds of stored procs littered throughout the design. I'm very nervous about making a change which could potentially break existing stored procs.
I know this is a loaded question, but in general, can UUID columns be converted to varchar columns without deleterious effects? It seems like any existing stored proc that inserted into or queried on the UUID column would still work, given that UUIDs can be represented as strings.
I tried the steps below and didn't see any issues, so I think you can go ahead and change the column's data type:
Created the table with the column type as unique identifier.
Created the stored procedure to insert a value into that table through NewID() function.
Execute the stored procedure, and data was inserted without any issue.
Now, I changed the column type to varchar and again executed the stored procedure.
Procedure ran fine without any issue, and data was inserted.
So the answer is yes: you can change the column's data type.
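One thing the steps above don't exercise is ordering. The round-trip claim (a GUID survives conversion to text and back) is easy to check, but ORDER BY and range comparisons change meaning after the conversion: SQL Server sorts uniqueidentifier values by an internal byte order, while varchar sorts lexicographically, so any proc relying on GUID ordering may behave differently. A quick sketch of both points:

```python
import uuid

# Round trip: a GUID stored as text in a varchar(128) column is lossless.
original = uuid.uuid4()
as_text = str(original)                 # what the varchar column would hold
assert uuid.UUID(as_text) == original

# Ordering caveat: string comparison is lexicographic, left to right.
# SQL Server's uniqueidentifier sort order weights the trailing byte
# groups most heavily, so the two orders generally disagree.
a = uuid.UUID("00000000-0000-0000-0000-010000000000")
b = uuid.UUID("01000000-0000-0000-0000-000000000000")
print(str(a) < str(b))  # True: lexicographic string order puts a first
```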

Which databases can create function-based indexes?

I know that this type of index exists in Oracle. Old versions of MySQL cannot create function-based indexes (thanks, Google). What about new versions? How about PostgreSQL, SQL Server, etc.?
I don't know the inner details of Oracle's implementation, but Postgres can create an index on an expression, which can be a function. From the documentation:
An index field can be an expression computed from the values of one or more columns of the table row. This feature can be used to obtain fast access to data based on some transformation of the basic data. For example, an index computed on upper(col) would allow the clause WHERE upper(col) = 'JIM' to use an index.
EDIT:
MySQL still seems to lack this; see virtual columns for details. There are also some discussions here, but they don't seem very active.
DB2 does it.
MS SQL cannot do it directly, but using computed columns you can achieve a similar effect; see the discussion.
PostgreSQL can create indexes on expressions including functions: Indexes on Expressions
You could emulate function-based indexes if your database supports insert and update triggers.
Add a column that will contain the function's value, and add an index on that column. Then have your triggers keep the function-value column up to date. Your queries have to change: replace function(params) with function_col.
