How to map SQL Server `varbinary(max)` field with NHibernate ByCode mapping?

I have a class with a property of type byte[] that I would like to map to a varbinary(max) field in SQL Server using the new NHibernate ByCode mapping.
So far, using SchemaAction = SchemaAutoAction.Recreate in order to have NH create the schema, I've ended up with the following (the class property name is "Data"):
When the mapping is not qualified in any way, I end up with a varbinary(8000) field
When the mapping is map.Property(x => x.Data, m => m.Length(int.MaxValue)), I end up with an 'image' field (which, according to the SQL Server docs, will be removed in a future version of SQL Server)
When the mapping is map.Property(x => x.Data, m => m.Type(TypeFactory.GetBinaryType(int.MaxValue))), I end up with a varbinary(8000) field, which just seems wrong
What am I missing?

I experienced the same problem and this has worked for me.
Property(e => e.Data, m => m.Column(cm => cm.SqlType("varbinary(MAX)")));
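For context, a complete by-code class mapping using that column customization might look like the following (a minimal sketch; the Document entity, its Id property, and the generator choice are illustrative, not from the original question):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class DocumentMap : ClassMapping<Document>
{
    public DocumentMap()
    {
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        // Forces schema export to emit varbinary(MAX) instead of the varbinary(8000) default
        Property(x => x.Data, m => m.Column(c => c.SqlType("varbinary(MAX)")));
    }
}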

Related

Apache Spark: Type conversion problem on write using JDBC driver to SQL Server / Azure DWH for column of BINARY type

My initial goal is to save UUID values in SQL Server/Azure DWH in a column of BINARY(16) type.
For example, I have demo table:
CREATE TABLE [Events] ([EventId] [binary](16) NOT NULL)
I want to write data to it using Spark like this:
import java.nio.{ByteBuffer, ByteOrder}
import java.util.UUID

val uuid = UUID.randomUUID()
val uuidBytes = Array.ofDim[Byte](16)
ByteBuffer.wrap(uuidBytes)
  .order(ByteOrder.BIG_ENDIAN)
  .putLong(uuid.getMostSignificantBits())
  .putLong(uuid.getLeastSignificantBits())
val schema = StructType(
  List(
    StructField("EventId", BinaryType, false)
  )
)
val data = Seq(uuidBytes).toDF("EventId").rdd
val df = spark.createDataFrame(data, schema)
df.write
  .format("jdbc")
  .option("url", "<DATABASE_CONNECTION_URL>")
  .option("dbTable", "Events")
  .mode(org.apache.spark.sql.SaveMode.Append)
  .save()
This code returns an error:
java.sql.BatchUpdateException: Conversion from variable or parameter type VARBINARY to target column type BINARY is not supported.
My question is: how do I cope with this situation and insert a UUID value into a BINARY(16) column?
My investigation:
Spark uses the concept of JdbcDialects and has a mapping from each Catalyst type to a database type and vice versa. For example, here is MsSqlServerDialect, which is used when we work against SQL Server or Azure DWH. In the getJDBCType method you can see the mapping:
case BinaryType => Some(JdbcType("VARBINARY(MAX)", java.sql.Types.VARBINARY))
and this, I think, is the root of my problem.
So I decided to implement my own JdbcDialect to override this behavior:
class SqlServerDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:sqlserver")
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case BinaryType => Option(JdbcType("BINARY(16)", java.sql.Types.BINARY))
    case _ => None
  }
}

val dialect = new SqlServerDialect
JdbcDialects.registerDialect(dialect)
With this modification I still get exactly the same error. It looks like Spark does not use the mapping from my custom dialect, even though I have checked that the dialect is registered, which is strange.

How to dump the query in Phalcon Framework using Model

$content = Content::findFirst([
    'conditions' => 'state = :state: AND URLid = :url: AND city = :city:',
    'bind' => [
        'state' => $geodata_usstates->statecode,
        'url' => $company,
        'city' => $geodata_geocity->city
    ]
]);
I want to dump the query generated for this. If I were using Laravel, I would simply do
$content->toSql();
But here I'm using Phalcon. How can I achieve the same thing in Phalcon?
The query is not available in your model. The query is built from the model using the query builder, passed to a Query instance, and executed against your db connection.
What you could do is use the events manager and listen for the db:beforeQuery event.
There is an example here: https://forum.phalconphp.com/discussion/18371/check-the-connection-before-querying-into-database
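For illustration, a minimal sketch of that approach (assuming the connection is registered as the 'db' service in the DI container; the error_log target is just a placeholder):

use Phalcon\Events\Manager as EventsManager;

$eventsManager = new EventsManager();
$eventsManager->attach('db:beforeQuery', function ($event, $connection) {
    // Log the prepared statement and the values bound to it
    error_log($connection->getSQLStatement());
    error_log(print_r($connection->getSqlVariables(), true));
});
$di->get('db')->setEventsManager($eventsManager);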
I don't believe you can output the complete query, because it's a prepared query - thus the best you'd get is:
SELECT * FROM `content` WHERE state = ? AND URLid = ? AND city = ? LIMIT 1
Personally, I don't bother trying to log queries in code. I've enabled the query log on my MariaDB server, and just check the log. The query logged is guaranteed to be the query run.
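For reference, the general query log on MariaDB can be enabled at runtime (a sketch; this requires a privileged user, and you can log to a file via log_output = 'FILE' instead):

SET GLOBAL general_log = 'ON';
SET GLOBAL log_output = 'TABLE';
-- then inspect the logged statements:
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC;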

Entity Framework Core: Computed column with persisted values

I'm a little surprised I haven't found any information on the following question, so please excuse me if I've missed it somewhere in the docs. Using SQL Server (2016 locally and Azure) and EF Core Code First, we're trying to create a computed table column with a persisted value. Creating the column works fine, but I don't have a clue how to persist the value. Here's what we do:
modelBuilder.Entity<SomeClass>(entity =>
{
    entity.Property(p => p.Checksum)
        .HasComputedColumnSql("(checksum([FirstColumnName], [SecondColumnName]))");
});
And here is what we'd actually like to get in T-SQL:
CREATE TABLE [dbo].[SomeClass]
(
    [FirstColumnName] [NVARCHAR](10)
    , [SecondColumnName] [NVARCHAR](10)
    , [Checksum] AS (CHECKSUM([FirstColumnName], [SecondColumnName])) PERSISTED
);
Can anyone point me in the right direction?
Thanks in advance, Tobi
UPDATE: Based on a good idea by @jeroen-mostert I also tried to just pass the PERSISTED keyword as part of the formula:
modelBuilder.Entity<SomeClass>(entity =>
{
    entity.Property(p => p.Checksum)
        .HasComputedColumnSql("(checksum([FirstColumnName], [SecondColumnName]) PERSISTED)");
});
And also outside of the parentheses:
modelBuilder.Entity<SomeClass>(entity =>
{
    entity.Property(p => p.Checksum)
        .HasComputedColumnSql("(checksum([FirstColumnName], [SecondColumnName])) PERSISTED");
});
However, and somewhat surprisingly, the computed column is still generated with Is Persisted = No, so the PERSISTED keyword simply seems to be ignored.
Starting with EF Core 5, the HasComputedColumnSql method has a new optional parameter bool? stored to specify that the column should be persisted:
modelBuilder.Entity<SomeClass>()
    .Property(p => p.Checksum)
    .HasComputedColumnSql("checksum([FirstColumnName], [SecondColumnName])", stored: true);
After doing some reading and some tests, I ended up trying the PERSISTED inside the SQL query and it worked.
entity.Property(e => e.Duration_ms)
    .HasComputedColumnSql("DATEDIFF(MILLISECOND, 0, duration) PERSISTED");
The generated migration was the following:
migrationBuilder.AddColumn<long>(
    name: "duration_ms",
    table: "MyTable",
    nullable: true,
    computedColumnSql: "DATEDIFF(MILLISECOND, 0, duration) PERSISTED");
To check on the database whether it is actually persisted I ran the following:
select is_persisted, name from sys.computed_columns where is_persisted = 1
and the column that I've created is there.
" You may also specify that a computed column be stored (sometimes called persisted), meaning that it is computed on every update of the row, and is stored on disk alongside regular columns:"
modelBuilder.Entity<SomeClass>(entity =>
{
    entity.Property(p => p.Checksum)
        .HasComputedColumnSql("checksum([FirstColumnName], [SecondColumnName])", stored: true);
});
This is taken (and slightly modified) from Microsoft Docs.: https://learn.microsoft.com/en-us/ef/core/modeling/generated-properties?tabs=data-annotations#computed-columns

IdentityServer4 Sample with ASP Identity with real SQL Server

I have been struggling to get the final sample (ASP.NET, EF Core, SQL) to work against a real SQL Server. Every sample I can find does not use real SQL Server; they always opt for an in-memory data store.
I changed the connection string
"Data Source=.;Initial Catalog=IS4;Integrated Security=True;"
and ran
dotnet ef database update -c ApplicationDbContext
This created a SQL database with 25 tables.
I tweaked Startup.cs to change
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(connectionString));
and changed b.UseSqlite to b.UseSqlServer:
.AddConfigurationStore(options =>
{
    options.ConfigureDbContext = b =>
        b.UseSqlServer(connectionString,
            sql => sql.MigrationsAssembly(migrationsAssembly));
})
// this adds the operational data from DB (codes, tokens, consents)
.AddOperationalStore(options =>
{
    options.ConfigureDbContext = b =>
        b.UseSqlServer(connectionString,
            sql => sql.MigrationsAssembly(migrationsAssembly));
    // this enables automatic token cleanup. this is optional.
    options.EnableTokenCleanup = true;
    // options.TokenCleanupInterval = 15;
});
I ran the server with "/seed" on the command line, but the seed functionality doesn't work.
First it complains that a Client can't have a NULL Id when it calls SaveChanges(). If I change the code to add the Id:
if (!context.Clients.Any())
{
    Console.WriteLine("Clients being populated");
    int i = 1;
    foreach (var client in Config.GetClients().ToList())
    {
        var x = client.ToEntity();
        x.Id = i++;
        context.Clients.Add(x);
    }
    context.SaveChanges();
}
else
{
    Console.WriteLine("Clients already populated");
}
I then get:
"Cannot insert the value NULL into column 'Id', table 'IS4.dbo.ClientGrantTypes'."
When I watch the videos, they say it can be migrated from SQLite to full SQL Server simply by changing the connection string, which is obviously not true given all the other changes I have made, so I must be doing (or missing) something else.
Any thoughts?
Could it be that all the tables with an "Id INT" column should be IDENTITY columns and they are not?
I checked the migrations code and it has
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.CreateTable(
        name: "ApiResources",
        columns: table => new
        {
            Id = table.Column<int>(nullable: false)
                .Annotation("Sqlite:Autoincrement", true),
            Description = table.Column<string>(maxLength: 1000, nullable: true),
            DisplayName = table.Column<string>(maxLength: 200, nullable: true),
I am guessing
.Annotation("Sqlite:Autoincrement", true)
doesn't work with full SQL Server, and therefore all the tables need their identity properties set.
Interestingly, if you run the other template to add the AdminUI,
dotnet new is4admin
it seems to add a couple of SQL scripts:
CREATE TABLE "Clients" (
"Id" INTEGER NOT NULL CONSTRAINT "PK_Clients" PRIMARY KEY AUTOINCREMENT,
"AbsoluteRefreshTokenLifetime" INTEGER NOT NULL,
"AccessTokenLifetime" INTEGER NOT NULL,
which does make them identity columns.
I was faced with this issue today, did a couple of searches online, and stumbled upon this: https://entityframeworkcore.com/knowledge-base/46587067/ef-core---do-sqlserver-migrations-apply-to-sqlite-
The link pointed out to switch the annotation portion in the migration class Up method after
Id = table.Column<int>(nullable: false)
from
.Annotation("Sqlite:Autoincrement", true);
to
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn)
And you will need to import
using Microsoft.EntityFrameworkCore.Metadata;
Then you build, and the migration will be successful.
To resolve this particular issue I used SSMS (the resulting change is sketched below):
Right-click on the table
Select script to drop and create
Add IDENTITY after the NOT NULL
Execute
However you are correct, it is using sqlite annotations in the sql file and in the migrations.
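For illustration, the relevant change in the regenerated script is adding IDENTITY to the key column (an abbreviated sketch; the remaining columns stay exactly as SSMS scripts them):

CREATE TABLE [dbo].[Clients] (
    [Id] INT IDENTITY(1,1) NOT NULL CONSTRAINT [PK_Clients] PRIMARY KEY
    -- remaining columns as scripted by SSMS
);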
To fully resolve this issue, you need to create an implementation of all 3 necessary database contexts: identity, persisted grant, and configuration.
That requires an implementation of design time factories for each of those contexts as well.
Then you can run add-migration in the package manager console for each of those contexts, and then run update database, or run the application with the migrate function when seeding.
So to recap:
Create implementations for the 3 db contexts
Create design-time factory implementations for those db contexts (see the sketch below)
Add the migrations
Update the database with those migrations
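As an illustration of the second step, a design-time factory for IdentityServer4's configuration context might look like the following (a sketch assuming the IdentityServer4.EntityFramework package and the connection string from above; the class name is illustrative):

using IdentityServer4.EntityFramework.DbContexts;
using IdentityServer4.EntityFramework.Options;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

public class ConfigurationContextDesignTimeFactory : IDesignTimeDbContextFactory<ConfigurationDbContext>
{
    public ConfigurationDbContext CreateDbContext(string[] args)
    {
        var options = new DbContextOptionsBuilder<ConfigurationDbContext>()
            .UseSqlServer("Data Source=.;Initial Catalog=IS4;Integrated Security=True;")
            .Options;
        // ConfigurationDbContext takes the store options as a second constructor argument
        return new ConfigurationDbContext(options, new ConfigurationStoreOptions());
    }
}

The same pattern applies to PersistedGrantDbContext (which takes OperationalStoreOptions) and to ApplicationDbContext.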

Fluent NHibernate many-to-many using non-PK column

I need to relate some tables using non-PK columns in two of them. The tables and FKs are set up so that a Message can be assigned to many Users and Groups via the MessageUserGroups table. Additionally, a User can be assigned to many Groups via the UserGroups table.
The AdGuid field (a uniqueidentifier) in Users and Groups is not the primary key for legacy reasons. However, I would like to use the UserId and GroupId fields in MessageUserGroups to relate Messages to Users and Groups through their AdGuid fields.
Additionally, I would like to have a PostedMessages property on User which relates Message.AuthorId to User.AdGuid, and have Messages properties on User and Group which relates to Messages through MessageUserGroups.
How do I explain this to Fluent NHibernate?
Currently the mappings look like:
public MessageMap()
{
    Id(m => m.Id);
    References(m => m.Author)
        .Nullable()
        .ForeignKey("FK_Messages_Author")
        .Column("AuthorId")
        .Fetch.Join()
        .PropertyRef(u => u.AdGuid);
    HasManyToMany<users.user>(m => m.Users)
        .Cascade.All()
        .ParentKeyColumn("MessageId")
        .ChildKeyColumn("UserId")
        .Table("MessageUserGroups")
        .FetchType.Join();
    HasManyToMany<users.group>(m => m.Groups)
        .Cascade.All()
        .ParentKeyColumn("MessageId")
        .ChildKeyColumn("GroupId")
        .Table("MessageUserGroups")
        .FetchType.Join();
}
public UserMap()
{
    Id(u => u.Id);
    Map(u => u.AdGuid);
    HasManyToMany<staff.Message>(u => u.Messages)
        .Cascade.All()
        .ParentKeyColumn("AdGuid")
        .ChildKeyColumn("UserId")
        .Inverse()
        .LazyLoad()
        .Table("MessageUserGroup");
    HasMany<staff.Message>(u => u.PostedMessages)
        .Cascade.All()
        .KeyColumn("AuthorId")
        .Inverse()
        .LazyLoad()
        .Table("Messages");
    HasManyToMany<Group>(u => u.Groups)
        .Cascade.All()
        .Not.LazyLoad()
        .ParentKeyColumn("UserID")
        .ChildKeyColumn("GroupID")
        .Table("UserGroups");
}
public GroupMap()
{
    Id(g => g.Id);
    Map(g => g.AdGuid);
    HasManyToMany<staff.Message>(g => g.Messages)
        .Cascade.All()
        .ParentKeyColumn("AdGuid")
        .ChildKeyColumn("GroupId")
        .Inverse()
        .LazyLoad()
        .Table("MessageUserGroup");
    HasManyToMany<User>(g => g.Users)
        .Inverse()
        .Cascade.All()
        .ParentKeyColumn("GroupId")
        .ChildKeyColumn("UserId")
        .Table("UserGroups");
}
A little lengthy, I know. I can successfully query for Messages using NH. However, when lazy-loading the Messages properties of User or Group, NH generates this SQL (extra properties removed for brevity):
SELECT users0_.MessageId as MessageId1_, users0_.UserId as UserId1_,
       user1_.Id as id38_0_, user1_.AdGuid as guid38_0_
FROM MessageUserGroups users0_
LEFT OUTER JOIN Users user1_ ON users0_.UserId = user1_.Id
WHERE users0_.MessageId = @p0
That join is invalid because it's trying to compare the uniqueidentifer MessageUserGroups.UserId against the bigint User.Id.
Likewise, when lazy-loading User.PostedMessages, this SQL is generated (again, abbreviated):
SELECT postedmess0_.AuthorId as AuthorId1_, postedmess0_.Id as Id1_,
       postedmess0_.Id as Id43_0_, postedmess0_.AuthorId as AuthorId43_0_
FROM Staff.Messages postedmess0_
WHERE postedmess0_.AuthorId = @p0
And @p0 is set to 89, which is the Id of the user I am testing with, but AuthorId needs to be a uniqueidentifier.
After both of those I get an SqlException: Operand type clash: uniqueidentifier is incompatible with bigint, which is to be expected.
It looks like (Fluent) NHibernate always wants to join on the PKs, even though the docs suggest that specifying the ParentKeyColumn and ChildKeyColumn should allow me to specify any columns I like. Is that really possible? Am I misreading?
ParentKeyColumn and ChildKeyColumn specify the column names in the link table in between, not the columns of the entity tables being joined. What you need is PropertyRef and ChildPropertyRef on the HasManyToMany to specify the properties to join on.
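For illustration, the Messages collection on UserMap might then be mapped like this (a sketch assuming the PropertyRef/ChildPropertyRef methods the answer names, taking the property name as a string; note the key columns now name the link-table columns, while PropertyRef redirects the join to User.AdGuid):

HasManyToMany<staff.Message>(u => u.Messages)
    .Table("MessageUserGroups")
    .ParentKeyColumn("UserId")     // link-table column pointing at the User
    .ChildKeyColumn("MessageId")   // link-table column pointing at the Message
    .PropertyRef("AdGuid")         // join UserId against User.AdGuid instead of User.Id
    .Inverse()
    .LazyLoad()
    .Cascade.All();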
