Python + peewee: retrieve a field after Model.save() (and trigger execution in an SQLite database)

In my application I work with SQLite. On one of the tables in the database I've implemented a trigger: after an insert on the table TAB, it updates a column named codecolumn, whose value depends on the ID primary-key field.
In my code I create an object from a previously defined peewee Model:
objfromModel = Model(params....)
After executing the line:
objfromModel.save()
we hoped to get back not only the generated id field (indeed, objfromModel.id is retrieved from the DB) but also the new codecolumn value generated by the trigger on the insert event. However, objfromModel.codecolumn is None.
Question: is there a trick in peewee to retrieve this new field value generated in the database by the trigger?

Unfortunately SQLite historically has no INSERT ... RETURNING (a RETURNING clause was only added in SQLite 3.35), so the value is not sent back with the insert. A couple of things you could do:
A. After creation, simply re-fetch the codecolumn, e.g. self.codecolumn = MyModel.select(MyModel.codecolumn).where(MyModel.id == self.id).scalar(convert=True). The use of scalar() says "return just one value", and convert=True says "convert the underlying database type to a Python type"; that is really only necessary if the database type is a date or datetime. A sketch of this approach follows this list.
B. Create a post-insert trigger that calls a user-defined function. Register a handler for the user-defined function on your database instance, and have your callback receive the new codecolumn value and set it as an attribute on the instance. Hopefully this makes sense?
C. Move the codecolumn logic out of a SQL trigger and into Python, making it easier to know ahead of time what its value will be. Obviously this depends on what that column contains.
Hope these ideas help.
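Here is a minimal sketch of option A using peewee 3 syntax; the model name MyModel, its fields, the database file name and the table name TAB are assumptions for illustration, not part of the original question:

from peewee import SqliteDatabase, Model, AutoField, CharField

db = SqliteDatabase('app.db')  # assumed database file

class MyModel(Model):
    id = AutoField()
    name = CharField()
    codecolumn = CharField(null=True)  # populated by the SQLite trigger on insert

    class Meta:
        database = db
        table_name = 'TAB'  # the table that carries the trigger

    def save(self, *args, **kwargs):
        rows = super().save(*args, **kwargs)
        # By now the INSERT has run and the trigger has filled codecolumn,
        # so re-read just that column for this row (option A above).
        # Plain .scalar() is used here; the convert=True keyword mentioned
        # above belongs to older peewee versions.
        self.codecolumn = (MyModel
                           .select(MyModel.codecolumn)
                           .where(MyModel.id == self.id)
                           .scalar())
        return rows

With this override, obj = MyModel(name='x'); obj.save() leaves obj.codecolumn holding whatever the trigger wrote, at the cost of one extra SELECT per save.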

Related

Pass an entire Row as parameter to User-Defined Table Function in Flink Table API

How can I pass an entire Row to my ScalarFunction RowToTupleConverter in the following code? All the examples only address passing single or multiple values by name, but I want the whole result of the SELECT statement to be passed as a Row. My guess was using *, but that's not recognized as a valid parameter.
envT.registerFunction("toTuple", new RowToTupleConverter());
envT.createTemporaryView("t", envT.fromDataStream(ds));
Table result = envT.from("t").select("getAvroFieldString(f1, 'HASH_KEY') as hk, getAvroFieldLong(f1, 'LOAD_DATE') as ld, 'test' as NAME");
envT.toAppendStream(result.select("*").map("toTuple(*)"), new TupleTypeInfo[...]).print();
I do not want to address the individual fields but the whole row, since I'm building everything generically, so my ScalarFunction requires a parameter of type Row. The function iterates through the row and creates a Tuple2<GenericRecord,GenericRecord> from the values of the row.
Background:
The job is built up like this, because we need both key and value from a Kafka Source using the Confluent Schema Registry, and the job should be generic to allow for an arbitrary schema, allowing multiple instantiations without changing the codebase. The only way we found to achieve this, is creating a DataStream from a FlinkKafkaConsumer, where Tuple2 includes the key and the value of a message each in an instance of GenericRecord, and transforming this to a Flink table.
Since GenericRecord is a black box to the Table API, I followed recommendations in another thread and created simple ScalarFunctions which extract the specific values I need. Right now that part is still hardcoded, but once everything works it will also be made generic. However, I'm struggling to map the result table back to a Tuple2 in order to write the transformed records to another Kafka topic, which is why I introduced another ScalarFunction to map from a Row to a Tuple2<GenericRecord,GenericRecord>.
Is this possible and, if so, how? If not, what kind of workaround could I use to solve this problem? I'd also appreciate suggestions for a more elegant approach in general, but judging from the amount of research I have done in that direction and the nature of the use case, I doubt there is one. Unfortunately, moving to SpecificRecord is not an option.

SQL Server: How to list changed columns with change tracking?

I use SQL Server 2012 Standard edition, and I activated Change Tracking function on a table.
When I list changes on a table with the CHANGETABLE function, I get a SYS_CHANGE_COLUMNS column containing binary data:
0x0000000045000000460000004700000048000000
How do I know which columns have changed ?
Because the column is a bitmask built from the column IDs of all the columns that were changed, it's difficult to interpret directly. In fact, MSDN says not to interrogate SYS_CHANGE_COLUMNS directly: https://msdn.microsoft.com/en-us/library/bb934145.aspx
This binary value should not be interpreted directly.
However, when you are detecting changes for notification purposes, usually the notification consumer has a good idea of which columns they are interested in changing.
For this use-case, use the CHANGE_TRACKING_IS_COLUMN_IN_MASK function.
-- Get the column ID of my column
declare @MyColumnId int
set @MyColumnId = columnproperty(object_id('MyTable'), 'MyColumn', 'ColumnId')
-- Check if it's changed
declare @MyColumnHasChanged bit
set @MyColumnHasChanged = CHANGE_TRACKING_IS_COLUMN_IN_MASK(@MyColumnId, @change_columns_bitmask);
If CHANGE_TRACKING_IS_COLUMN_IN_MASK tells me whether one given column has changed, how can I write a script that tells me which columns have changed? I have around 50 attributes for each table.
I'm afraid you'll need to loop through all of the columns you may be interested in (a sketch of such a loop follows below)... If this is too restrictive, you may have to use another change-notification approach, like Change Data Capture (CDC) or triggers.
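For illustration, here is a hedged sketch of that loop driven from application code with Python and pyodbc; the driver, server, database and table names (dbo.MyTable) are placeholders, and the last-sync version is hard-coded to 0:

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
cur = conn.cursor()

# Column IDs and names of the tracked table.
columns = cur.execute(
    "SELECT column_id, name FROM sys.columns "
    "WHERE object_id = OBJECT_ID('dbo.MyTable')").fetchall()

# Change rows, including the SYS_CHANGE_COLUMNS bitmask, since version 0.
changes = cur.execute(
    "SELECT SYS_CHANGE_VERSION, SYS_CHANGE_COLUMNS "
    "FROM CHANGETABLE(CHANGES dbo.MyTable, ?) AS CT", 0).fetchall()

for version, mask in changes:
    changed = []
    if mask is None:
        # NULL mask (e.g. an INSERT, or column tracking disabled): treat every column as changed.
        changed = [name for _, name in columns]
    else:
        for column_id, name in columns:
            # Ask SQL Server whether this column ID is set in the bitmask.
            if cur.execute("SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(?, ?)",
                           column_id, mask).fetchval():
                changed.append(name)
    print(version, changed)

The same loop can of course be written purely in T-SQL with a cursor over sys.columns; the point is simply that the ~50 per-column checks are generated rather than hand-written.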

Using Effort (EF Testing Tool) with Computed Column

I have a number of integration tests which access the DB directly: they create test prerequisite objects, perform the tests and then clean up afterwards. However, I wanted to try out the same approach in-memory.
I have just introduced Effort into my project and it works very easily. However, I've hit a problem that I have been trying to solve, without success.
One of the tables that I need filled with dummy data, as a test prerequisite, contains a computed column (nvarchar, not null). For the scope of the test I really don't care about that column's value, but even if I try to insert dummy data, my data is ignored and I get hit with an error:
"Column 'x' cannot be null. Error code: GenericError"
In my tests I am using the same edmx file as the actual code, which saves me from constantly having to update a separate copy of the edmx.
Is there a way to force the test to update the model (at runtime) so that the column becomes a nullable, non-computed column [by overriding OnModelCreating]? Or is there a way to insert a default value (anything goes for this column) to stop this error [by overriding SaveChanges]?
I have currently tried the following:
Attaching the objects using .Attach() instead of .Add()
Setting the EntityState to Unchanged after adding
Forcing the value through Entry.OriginalValues [this fails since the entity is in the Added state]
Edit:
I have tried overriding the OnModelCreating method, but to no avail since this is DB-First.
modelBuilder.Entity<Entity_Name>().Property(p => p.x).IsOptional().HasDatabaseGeneratedOption(System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption.None);
Open your EDMX file in an XML editor, find your entity under the StorageModels section, and add StoreGeneratedPattern="Computed" to the column definition.
But if you update the model, or delete and re-add that table, you will lose this modification. You can, however, write a console app that updates the edmx file and adds StoreGeneratedPattern="Computed" where needed, and add that app to the pre-build events in Visual Studio (a sketch of such a script follows below).
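As an illustration of that pre-build step, here is a hedged sketch in Python; the file name Model.edmx, the property name x and the namespace URI are assumptions, and ElementTree rewrites namespace prefixes when re-serializing, so treat this as a starting point rather than a drop-in tool:

import xml.etree.ElementTree as ET

EDMX_PATH = "Model.edmx"  # placeholder path to the model file
SSDL_NS = "http://schemas.microsoft.com/ado/2009/11/edm/ssdl"  # EDMX v3 storage-model namespace

tree = ET.parse(EDMX_PATH)

# Walk every <Property> in the storage model and mark the computed column.
for prop in tree.iter("{%s}Property" % SSDL_NS):
    if prop.get("Name") == "x":  # the computed column from the question
        prop.set("StoreGeneratedPattern", "Computed")

# The output stays valid XML, but namespace prefixes may differ from the
# original; a more surgical text edit may be preferable in practice.
tree.write(EDMX_PATH, xml_declaration=True, encoding="utf-8")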
The cause of the problem was a bug in Effort: when a computed column is based on non-nullable columns, the computed column would automatically become non-nullable as well, so the Effort database was expecting a non-null value. With the latest update the problem is resolved; you have to set the global EntityFrameworkEffortManager.UseDefaultForNotNullable flag to true.
See the issue on GitHub.

Entity Framework and SQL Server OUTPUT clause

I'd like to use the SQL OUTPUT clause to keep a history of records in my database while using Entity Framework. To achieve this, EF would need to generate something like the following for a DELETE statement:
Delete From table1
output deleted.*, 'user name', getdate() into table1_hist
Where field = 1234;
The table table1_hist has the same columns as table1, plus two columns to store the name of the user who performed the action and when it happened. However, EF doesn't seem to have a way to support this SQL Server clause, so I'm lost on how to implement it.
I looked at EF's source code, and the DELETE command is created inside an internal static method (GenerateDeleteSql in the System.Data.Entity.SqlServer.SqlGen.DmlSqlGenerator class), so I can't extend the class to add the behavior I want. It looks like I'd have to rewrite the SQL Server provider based on the existing code, but that is something I'd like to avoid...
So, my question is: is there another option to do this (an extension, for example), or do I have to rewrite this provider?
Thank you.
Have you considered one of the following?
Using stored procedures to encapsulate your data logic
A DELETE trigger to capture the data
Change Data Capture (Enterprise edition only)
Not actually deleting the data, but merely setting a flag to mark it as deleted

Dynamic SQL statement return value using the current target connection

I'm currently creating my first real-life project in Pervasive. The task is to map a certain XML structure containing orders (as in shops and products) to 3 tables I created myself. These tables live in an MS SQL Server instance.
All of the tables have a unique key called "id", an automatically incremented column. I've dropped this column from all mappings so that Pervasive will not try to fill it itself.
For certain calculations, for a split key in one of the tables and for references to the created records in other tables, I need the id that the database has just created. I have googled for the answer: I can use "select @@identity;" as a statement, and it returns the id most recently created for the current connection. This means that in Pervasive I will have to execute this statement using the already existing target connection object.
But how do I do that? I am quite sure that I will need a DJImport or DJExport object, but how do I get one associated with the connection that Pervasive uses to insert the records?
Or is there any other way to handle this auto increment when I need to reference the id in other tables?
Not sure how things work in Pervasive, but you may run into issues with @@identity. SCOPE_IDENTITY() would probably be safer, but may still not work in Pervasive.
Hopefully your tables have a natural key in addition to the generated id, in which case you can select your id based on the natural key. This will avoid any issues you may have with disparate sessions and scope.
If anyone looks this post up and wonders about the answer, it's "you can't". Pervasive does not allow access to its own connection object, the one it uses to query the database. Without access to it, you cannot be guaranteed to fetch the right id. The solution for us was this: we used a stored procedure, called in the Before-Transformation event, that created the header record and returned the id and an optional error message as a table. We execute it, save the returned id, and use it throughout our mapping.
