Laravel Eloquent how to create UNIQUE constraint with duplicate NULLs - sql-server

I'm using Laravel 5 with MS SQL Server 2014.
I want to create a unique constraint, but it should allow multiple NULL values.
Here is the code I'm using, where 'passport_no' has to be unique if not null.
Schema::create('UserProfile', function(Blueprint $table){
$table->increments('userprofile_id');
$table->integer('user_id')->unsigned();
$table->string('passport_no', 50)->unique()->nullable();
$table->foreign('user_id')->references('id')->on('users')
->onUpdate('cascade')->onDelete('cascade');
});

This is an ancient question, but it still needs answering. As stated above, SQL Server from 2008 onwards, including Azure SQL, supports a filtered unique index that works around this. In your database migration you can check the driver used and substitute the schema builder's standard SQL with an MSSQL-specific statement.
This migration example is for Laravel 5+, and creates a users table with a unique, but nullable, api_token column:
public function up()
{
Schema::create('users', function (Blueprint $table) {
$table->bigIncrements('id');
$table->timestamps();
$table->string('name', 100)->nullable()->default(null);
// etc.
$table->string('api_token', 80)->nullable()->default(null);
if (DB::getDriverName() !== 'sqlsrv') {
$table->unique('api_token', 'users_api_token_unique');
}
});
if (DB::getDriverName() === 'sqlsrv') {
DB::statement('CREATE UNIQUE INDEX users_api_token_unique'
. ' ON users (api_token)'
. ' WHERE api_token IS NOT NULL');
}
}

You can use a unique index and, in its filter, set your condition, e.g.
passport_no is not null
In this way you can solve your problem.
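Applied to the UserProfile table from the question, the raw T-SQL for such a filtered unique index would look roughly like this (the index name is just illustrative):
-- Unique only across non-NULL passport numbers; any number of NULLs is still allowed
CREATE UNIQUE INDEX UX_UserProfile_passport_no
ON UserProfile (passport_no)
WHERE passport_no IS NOT NULL;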

Related

Add column to all tables through migration in Laravel

As our project scaled, we decided that every piece of data should belong to the company that created it. Therefore I need to add a column "data_owner_company_id" that points to the company that owns a given record. Yes, it's possible to generate a migration to add this column to each model, but that is not really feasible since there are 120+ tables & models. How can I tackle this with minimum effort?
For the model part I figured I can easily apply it to all models by inheritance, but I'm not sure about the migration.
TL;DR
How to add an int column to all tables by migration?
Database: MySQL v8
Framework: Laravel 8, PHP 7.3
It's simple: if you find all the table names in your database, you can loop over them and create the column for each and every table.
Consider creating the columns from a queued job, as it will be a heavy task for 120+ tables.
Check the following code:
class CreateDataOwnerCompanyIdtoEachTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up ()
{
$columns = 'Tables_in_' . env('DB_DATABASE'); // property name of each row returned by SHOW TABLES; DB_DATABASE is the database name
$tables = DB::select('SHOW TABLES');
foreach ( $tables as $table ) {
// TODO: dispatch this as a queued job, since processing 120+ tables will take time
Schema::table($table->$columns, function (Blueprint $table) {
$table->unsignedInteger('data_owner_company_id');
});
}
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down ()
{
$columns = 'Tables_in_' . env('DB_DATABASE'); // property name of each row returned by SHOW TABLES; DB_DATABASE is the database name
$tables = DB::select('SHOW TABLES');
foreach ( $tables as $table ) {
// TODO: dispatch this as a queued job, since processing 120+ tables will take time
Schema::table($table->$columns, function (Blueprint $table) {
$table->dropColumn('data_owner_company_id');
});
}
}
}
I'm not 100% sure that it's going to work, but here it goes:
Create a class that extends Illuminate\Database\Schema\Blueprint.
In its constructor, call the parent constructor and then:
$this->unsignedBigInteger('data_owner_company_id')->nullable();
Use your new class in migrations instead of the default Blueprint.
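Whichever variant you go with, the statement that ends up being run against each table is roughly the following (MySQL syntax, table name purely illustrative):
ALTER TABLE `invoices`
ADD COLUMN `data_owner_company_id` BIGINT UNSIGNED NULL;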

How do i change a column's reference table in laravel 7

I made a mistake when I was building the database. As you will see in the code, my "order_id" column in the "baskets" table references the "customers" table, but it should have referenced the "orders" table:
public function up()
{
Schema::create('baskets', function (Blueprint $table) {
$table->id();
$table->unsignedBigInteger('firm_id');
$table->foreign('firm_id')
->references('id')->on('firms')
->onDelete('cascade');
$table->unsignedBigInteger('customer_id');
$table->foreign('customer_id')
->references('id')->on('customers')
->onDelete('cascade');
$table->unsignedBigInteger('order_id');
$table->foreign('order_id')
->references('id')->on('customers')
->onDelete('cascade');
$table->unsignedBigInteger('service_id');
$table->foreign('service_id')
->references('id')->on('services')
->onDelete('cascade');
$table->decimal('price', 11, 2);
$table->integer('quantity')->default(1);
$table->string('note')->nullable();
$table->timestamps();
});
}
So how can I change its referenced table from "customers" to "orders"?
Thank you all in advance.
You should be looking at the dropForeign method (scroll down a little).
This method accepts an $index argument. The documentation already shows how to write this $index. Looking at the source code you can also see how this index is being generated. At some point the createIndexName method is being called during the foreign call.
Looking at your code and the createIndexName method, we could guess the index name that is being created:
baskets_order_id_foreign
To round it up, you should be able to drop the old foreign key and add the correct one by executing the following:
public function up()
{
Schema::table('baskets', function (Blueprint $table) {
$table->dropForeign('baskets_order_id_foreign');
// Add the new foreign key pointing at the orders table
$table->foreign('order_id')
->references('id')->on('orders')
->onDelete('cascade');
});
}
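Under the hood this corresponds to something like the following (MySQL syntax; the constraint name follows Laravel's table_column_foreign convention):
ALTER TABLE baskets DROP FOREIGN KEY baskets_order_id_foreign;
ALTER TABLE baskets ADD CONSTRAINT baskets_order_id_foreign
FOREIGN KEY (order_id) REFERENCES orders (id) ON DELETE CASCADE;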

How to sync the table of postgresql schema - Sequelize ORM

I have two schemas in my Postgres database:
public // default schema
first_user
Now I have the same tables in both schemas.
I changed the table structure, so I want to run the sync now.
I sync the tables using:
const db = new Sequelize(postgres_db, postgres_user, postgres_pwd, {
host: postgres_host,
port: 5432,
dialect: 'postgres',
logging: false,
});
db.sync().then(() => {
console.log('Table Synced');
}, (err) => {
console.log(err);
});
After running this, my table structure inside the public schema changed successfully, but my first_user schema's table structure remains the same.
How can I solve this?
NOTE: I don't want to lose the data inside my tables.
I finally implemented this using Sequelize migrations:
http://docs.sequelizejs.com/manual/tutorial/migrations.html
If you can't use Sequelize migrations because of the lack of TypeScript support, you can fall back to Migra, which is easy to use:
https://djrobstep.com/docs/migra
You can try a CREATE TABLE ... AS TABLE query:
create table first_user.tableName as table public.tableName;
It will create the table with the updated structure as well as the data (note that indexes and constraints are not copied).
Thanks.

IdentityServer4 Sample with ASP Identity with real SQL Server

I have been struggling to get the final SAMPLE (ASP.NET, EF Core, SQL) to work against a real SQL Server. Every sample I can find does not use real SQL Server; they always opt for an in-memory data store.
I changed the connection string
"Data Source=.;Initial Catalog=IS4;Integrated Security=True;"
and ran
dotnet ef database update -c ApplicationDbContext
This created a SQL database with 25 tables.
I tweaked Startup.cs to change
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(connectionString));
and changed b.UseSqlite to b.UseSqlServer:
.AddConfigurationStore(options =>
{
options.ConfigureDbContext = b =>
b.UseSqlServer(connectionString,
sql => sql.MigrationsAssembly(migrationsAssembly));
})
// this adds the operational data from DB (codes, tokens, consents)
.AddOperationalStore(options =>
{
options.ConfigureDbContext = b =>
b.UseSqlServer(connectionString,
sql => sql.MigrationsAssembly(migrationsAssembly));
// this enables automatic token cleanup. this is optional.
options.EnableTokenCleanup = true;
// options.TokenCleanupInterval = 15;
});
I ran the server with "/seed" on the command line but the seed functionality doesn't work.
First it complains that Client can't have a NULL Id when it calls SaveChanges(). If I change the code to add the Id:
if (!context.Clients.Any())
{
Console.WriteLine("Clients being populated");
int i = 1;
foreach (var client in Config.GetClients().ToList())
{
var x = client.ToEntity();
x.Id = i++;
context.Clients.Add(x);
}
context.SaveChanges();
}
else
{
Console.WriteLine("Clients already populated");
}
I then get
"Cannot insert the value NULL into column 'Id', table 'IS4.dbo.ClientGrantTypes'".
When I watch the videos they say it can be migrated from SQLite to full SQL Server simply by changing the connection string, which is obviously not true given all the other changes I have made, so I must be doing (or missing) something else.
Any thoughts?
Could it be that all the tables with an "Id INT" column should be IDENTITY columns and they are not?
I checked the migrations code and it has
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.CreateTable(
name: "ApiResources",
columns: table => new
{
Id = table.Column<int>(nullable: false)
.Annotation("Sqlite:Autoincrement", true),
Description = table.Column<string>(maxLength: 1000, nullable: true),
DisplayName = table.Column<string>(maxLength: 200, nullable: true),
I am guessing
.Annotation("Sqlite:Autoincrement", true),
doesn't work with full SQL Server and therefore all the tables need their identity property set.
Interestingly, if you run the other template to add the AdminUI
dotnet new is4admin
it seems to add a couple of SQL scripts:
CREATE TABLE "Clients" (
"Id" INTEGER NOT NULL CONSTRAINT "PK_Clients" PRIMARY KEY AUTOINCREMENT,
"AbsoluteRefreshTokenLifetime" INTEGER NOT NULL,
"AccessTokenLifetime" INTEGER NOT NULL,
which does make them identity columns.
I was faced with this issue today and did a couple of searches online and stumbled upon this https://entityframeworkcore.com/knowledge-base/46587067/ef-core---do-sqlserver-migrations-apply-to-sqlite-
The link points out that you should switch the annotation portion in the migration class's Up method after
Id = table.Column<int>(nullable: false)
from
.Annotation("Sqlite:Autoincrement", true);
to
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn)
And you will need to import
using Microsoft.EntityFrameworkCore.Metadata;
Then you build, and the migration will be successful.
To resolve this particular issue I used SSMS:
Right-click on the table
Select script to drop and create
Add IDENTITY after the NOT NULL
Execute
However you are correct, it is using SQLite annotations in the SQL file and in the migrations.
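A sketch of what the regenerated script then looks like for the Clients table shown earlier; the key change is the IDENTITY keyword on the Id column (T-SQL, remaining columns unchanged):
CREATE TABLE [Clients] (
[Id] INT IDENTITY(1,1) NOT NULL CONSTRAINT [PK_Clients] PRIMARY KEY,
[AbsoluteRefreshTokenLifetime] INT NOT NULL,
[AccessTokenLifetime] INT NOT NULL
-- ... remaining columns as generated ...
);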
To fully resolve this issue, you need to create an implementation of all 3 necessary database contexts: identity, persisted grant, and configuration.
That requires an implementation of design time factories for each of those contexts as well.
Then you can run add-migration in the package manager console for each of those contexts, and then run update database, or run the application with the migrate function when seeding.
So to recap:
Create implementations for the 3 db contexts
Create Design time factory implementations for those db contexts
Add the migrations
Update the database with those migrations

Audit trail with Entity Framework Core

I have an ASP.NET Core 2.0 app using Entity Framework Core on a SQL Server database.
I have to trace and audit everything the users do to the data. My goal is to have an automatic mechanism that records everything that is happening.
For example, if I have the table Animals, I want a parallel table "Audit_animals" where you can find all the info about the data, the operation type (add, delete, edit) and the user who made the change.
I already did this a while ago in Django + MySQL, but now the environment is different. I found this and it seems interesting, but I'd like to know if there are better ways and which is the best approach to do this in EF Core.
UPDATE
I'm trying this and something happens, but I have some problems.
I added this:
services.AddMvc().AddJsonOptions(options => {
options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
});
public Mydb_Context(DbContextOptions<Mydb_Context> options) : base(options)
{
Audit.EntityFramework.Configuration.Setup()
.ForContext<Mydb_Context>(config => config
.IncludeEntityObjects()
.AuditEventType("Mydb_Context:Mydb"))
.UseOptOut();
}
public MyRepository(Mydb_Context context)
{
_context = context;
_context.AddAuditCustomField("UserName", "pippo");
}
I also created a table to insert the audits (only one, to test this tool), but the only thing I got is a list of .json files with the data I created... why??
Read the documentation:
Event Output
To configure the output persistence mechanism please see Configuration and Data Providers sections.
Then, in the documentation on Configuration:
If you don't specify a Data Provider, a default FileDataProvider will be used to write the events as .json files into the current working directory. (emphasis mine)
Long and short, follow the documentation to configure the data provider you'd like to use.
If you are going to map the audit table (Audit_Animals) to the same EF context as the audited Animals table, you can use the EntityFramework Data Provider included on the same Audit.EntityFramework library.
Check the documentation here:
Entity Framework Data Provider
If you plan to store the audit logs in
the same database as the audited entities, you can use the
EntityFrameworkDataProvider. Use this if you plan to store the audit
trails for each entity type in a table with similar structure.
There is another library that can audit EF contexts in a similar way, take a look: zzzprojects/EntityFramework-Plus.
Cannot recommend one over the other since they provide different features (and I'm the owner of the audit.net library).
Update:
.NET 6 and Entity Framework Core 6.0 supports SQL Server temporal tables out of the box.
See this answer for examples:
https://stackoverflow.com/a/70017768/3850405
Original:
You could have a look at temporal tables (system-versioned temporal tables) if you are using SQL Server 2016 or later, or Azure SQL.
https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-ver15
From documentation:
Database feature that brings built-in support for providing
information about data stored in the table at any point in time rather
than only the data that is correct at the current moment in time.
Temporal is a database feature that was introduced in ANSI SQL 2011.
There is currently an open issue to support this out of the box:
https://github.com/dotnet/efcore/issues/4693
There are third party options available today but since they are not from Microsoft it is of course a risk that they won't be supported in future versions.
https://github.com/Adam-Langley/efcore-temporal-query
https://github.com/findulov/EntityFrameworkCore.TemporalTables
I solved it like this:
If you use the included Visual Studio 2019 LocalDB (Microsoft SQL Server 2016 (13.1.4001.0 LocalDB)) you will need to upgrade if you use cascading DELETE or UPDATE. This is because temporal tables with cascading actions are not supported in that version.
Complete guide for upgrading here:
https://stackoverflow.com/a/64210519/3850405
Start by adding a new empty migration. I prefer to use Package Manager Console (PMC):
Add-Migration "Temporal tables"
Should look like this:
public partial class Temporaltables : Migration
{
protected override void Up(MigrationBuilder migrationBuilder)
{
}
protected override void Down(MigrationBuilder migrationBuilder)
{
}
}
Then edit the migration like this:
public partial class Temporaltables : Migration
{
List<string> tablesToUpdate = new List<string>
{
"Images",
"Languages",
"Questions",
"Texts",
"Medias",
};
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.Sql($"CREATE SCHEMA History");
foreach (var table in tablesToUpdate)
{
string alterStatement = $@"ALTER TABLE [{table}] ADD SysStartTime datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN
CONSTRAINT DF_{table}_SysStart DEFAULT GETDATE(), SysEndTime datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN
CONSTRAINT DF_{table}_SysEnd DEFAULT CONVERT(datetime2 (0), '9999-12-31 23:59:59'),
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)";
migrationBuilder.Sql(alterStatement);
alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = History.[{table}]));";
migrationBuilder.Sql(alterStatement);
}
}
protected override void Down(MigrationBuilder migrationBuilder)
{
foreach (var table in tablesToUpdate)
{
string alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = OFF);";
migrationBuilder.Sql(alterStatement);
alterStatement = $@"ALTER TABLE [{table}] DROP PERIOD FOR SYSTEM_TIME";
migrationBuilder.Sql(alterStatement);
alterStatement = $@"ALTER TABLE [{table}] DROP DF_{table}_SysStart, DF_{table}_SysEnd";
migrationBuilder.Sql(alterStatement);
alterStatement = $@"ALTER TABLE [{table}] DROP COLUMN SysStartTime, COLUMN SysEndTime";
migrationBuilder.Sql(alterStatement);
alterStatement = $@"DROP TABLE History.[{table}]";
migrationBuilder.Sql(alterStatement);
}
migrationBuilder.Sql($"DROP SCHEMA History");
}
}
tablesToUpdate should contain every table you need history for.
Then run Update-Database command.
Original source, a bit modified with escaping tables with square brackets etc:
https://intellitect.com/updating-sql-database-use-temporal-tables-entity-framework-migration/
Testing Create, Update and Delete will then show a complete history.
[HttpGet]
public async Task<ActionResult<string>> Test()
{
var identifier1 = "OATestar123";
var identifier2 = "OATestar12345";
var newQuestion = new Question()
{
Identifier = identifier1
};
_dbContext.Questions.Add(newQuestion);
await _dbContext.SaveChangesAsync();
var question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier1);
question.Identifier = identifier2;
await _dbContext.SaveChangesAsync();
question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier2);
_dbContext.Entry(question).State = EntityState.Deleted;
await _dbContext.SaveChangesAsync();
return Ok();
}
I tested it a few times and the full change history was recorded.
This solution has a huge advantage, in my opinion: it is not Object Relational Mapper (ORM) specific and you even get history for changes made in plain SQL.
The history tables are also read-only by default, so there is less chance of a corrupt audit trail. If you try to modify them you get the error: Cannot update rows in a temporal history table.
If you need access to the data you can use your preferred ORM to fetch it, or audit via SQL.
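For example, a plain T-SQL temporal query over the sample Questions table above returns every version of a row together with the period during which it was valid (SysStartTime and SysEndTime are the hidden period columns added by the migration):
SELECT Identifier, SysStartTime, SysEndTime
FROM Questions
FOR SYSTEM_TIME ALL
WHERE Identifier LIKE 'OATestar%'
ORDER BY SysStartTime;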
