I want to do inserts into a SQL Server DB with an ASP.NET Core API.
The data I get includes 9 values, and 4 of them reference other tables. Is it better to simply try the insert via EF Core and catch the SQL exception if some values are not in those tables, or is it better to check for that beforehand (which means more queries in one API request)?
If the data is invalid, I only do one insert into another table.
The percentage of invalid data is about 5%.
You should check whether the referenced data already exists in the other tables.
Also, depending on your DB model configuration, you might end up with duplicate items if you insert items that already exist instead of retrieving and updating them.
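If you do validate first, the four lookups don't have to cost four round trips; they can be batched into a single query. A sketch, with entirely hypothetical table and column names standing in for the four referenced tables:

```sql
-- One round trip that checks all four foreign-key values at once.
-- Each subquery returns 1 if the referenced row exists, 0 otherwise.
SELECT
    (SELECT COUNT(*) FROM dbo.Customers  WHERE CustomerId  = @CustomerId)  AS CustomerExists,
    (SELECT COUNT(*) FROM dbo.Products   WHERE ProductId   = @ProductId)   AS ProductExists,
    (SELECT COUNT(*) FROM dbo.Warehouses WHERE WarehouseId = @WarehouseId) AS WarehouseExists,
    (SELECT COUNT(*) FROM dbo.Carriers   WHERE CarrierId   = @CarrierId)   AS CarrierExists;
```

If any column comes back 0, route the data to the "invalid" table without ever attempting the main insert.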
When using the Generic Database Connector in Mule, is there a way that when you insert a new record into a database, the ID (or the entire record) is returned to the flow as the payload?
My use case is fairly common in that I'm exposing an API which results in a record being generated in a database. The response to that API call needs to include the ID of the record created.
I've looked through the Mule documentation and haven't found anything. Hopefully I'm overlooking something very obvious here...
Instead of generating an ID in the database and retrieving it later, I'd recommend generating a UUID inside your Mule application using java.util.UUID.randomUUID() and using that when inserting the record into the DB. This way you already have the ID and there is no need to get it back from the database.
A plain insert query will just give you the count of rows inserted, not the record itself.
You can write a stored procedure that inserts the record and then returns the generated ID as an output parameter. Call this stored procedure from the Mule Database connector and you will have the stored procedure's output as the payload.
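A sketch of such a stored procedure, with hypothetical table and column names; SCOPE_IDENTITY() returns the identity value generated by the INSERT in the current scope:

```sql
CREATE PROCEDURE dbo.InsertOrder
    @CustomerName NVARCHAR(100),
    @NewId        INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Orders (CustomerName) VALUES (@CustomerName);
    -- Capture the identity generated by the INSERT above.
    SET @NewId = SCOPE_IDENTITY();
END
```

Alternatively, on SQL Server a plain `INSERT ... OUTPUT INSERTED.*` statement returns the newly inserted row as a result set, with no stored procedure needed.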
I have a legacy Access 97 Frontend application which utilises a SQL Server 2005 backend over a SQL Server ODBC Driver (Connection), we use the Linked Table feature on this setup.
I create, amend and link in tables on a daily basis and I am aware of the conversions that occur between the different data types.
There seems to be an issue with one table that I recently created, it has exactly the same setup and permissions as many of the other tables in the database but once I link it into Access 97 it seems to show #NAME in all columns and I also receive an 'ODBC Call Failed' error.
If I remove the Primary Key from the table and do not select a 'Unique Record Identifier' then I am able to view the data in the table but I obviously can't edit it.
There are 3 columns which are VARCHARs longer than 255. If I reduce these columns to 255 or less I am then able to view the data in the table, but if I then try to edit or delete the data I receive a new error: 'The Microsoft Jet Database engine stopped the process because you and another user are attempting to change the same data at the same time'. I know this is not possible because at present I am the only one with access to the table.
In this particular table there are 146 columns; if I delete half of these, the table starts to work as it should. Again, I have tables with far more columns than this that work perfectly.
Troubleshooting issues like this can be frustrating for sure.
I have found this article very helpful for my linked tables:
Optimizing Microsoft Office Access Applications Linked to SQL Server
Specifically read the section titled Supporting Concurrency Checks.
One thing you might try is adding a "timestamp" column to the table in question.
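Adding such a column is a one-line change (`timestamp` is the older synonym for `rowversion`; SQL Server maintains the value automatically, and Access/Jet can then use that single column for its concurrency check instead of comparing every column). The table name here is hypothetical:

```sql
-- SQL Server updates this column automatically on every row change.
ALTER TABLE dbo.MyLinkedTable ADD RowVer rowversion NOT NULL;
```

After adding the column, relink the table in Access so the linked table definition picks it up.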
I have a database in which two tables have a 1:1 relationship using foreign keys. Table one is called Manifest and table two is called Inventory. When an inventory record is added using the application this is built for it uses a foreign key to reference the matching record in the manifest table. In addition, this causes an update to a column in the manifest table for the matching record called Received (datatype: BIT) to 1. This is used for reconciliation and reporting purposes.
Now here is where it gets tricky: This database is synchronized to a server database using Sync Framework in a client-server relationship. The Manifest table is synchronized in one direction from server to client, and the Inventory table is synchronized from client to server. Because of this the "received" column in the Manifest table is not always updated accurately on the server-side after a sync.
I was thinking of creating a stored procedure to perform this update, but I'm a bit rusty on my SQL (and T-SQL). The SP I was thinking of would use a CURSOR to locate any records in the Inventory table where the foreign key is NOT NULL (NULL is allowed due to exceptions where we receive something that was not in the manifest). The cursor would then let me iterate through all the records, locate the matching record in the Manifest table, and update the "received" column. I know this cannot be the best way to perform this update. Can anyone suggest another way of doing this that would be faster and use fewer resources? Examples would be appreciated =)
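For reference, the cursor described above can usually be replaced by a single set-based UPDATE with a join; the table and column names below are guesses reconstructed from the description:

```sql
-- Mark every manifest row that has a matching inventory record as received.
-- The inner join already excludes inventory rows with a NULL foreign key.
UPDATE m
SET    m.Received = 1
FROM   dbo.Manifest  AS m
JOIN   dbo.Inventory AS i
       ON i.ManifestId = m.ManifestId
WHERE  m.Received = 0;   -- only touch rows not yet flagged
```

One statement like this lets SQL Server process all matching rows at once, which is almost always faster and lighter than a row-by-row cursor.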
I've got a database (SQL Server) and a live website accessing that database through linq to sql classes. Now, I need to add a column (allowing null values) to one of the tables in that database.
If I add the column to the database first, and update the linq to sql classes afterward, will the old linq to sql data classes cease to work (since the database schema is different)? The last thing I want is for the website to crash as I update the database.
If doing what I described does cause a problem, what's the best way to do this?
Adding a new column (that allows nulls) to the database schema without refreshing the LINQ to SQL model will not cause any problems.
LINQ to SQL generates parameterized queries at runtime, which will not be affected by the new column as long as it allows null for the insert statement.
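The safe schema change is just the nullable column itself; table and column names here are hypothetical:

```sql
-- NULL (or a default) is what keeps the existing LINQ to SQL
-- INSERT statements working before the model is refreshed.
ALTER TABLE dbo.Customers ADD MiddleName NVARCHAR(50) NULL;
```

Once this is deployed, refresh the LINQ to SQL model at your convenience and redeploy the site.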
I have a form with data. On any change or insertion, the data should be saved to two different tables: for example, name and salary in one table, and address and email ID in another table.
As in the example above, I have several columns in both tables.
Now I want to audit the tables. So I think I have to create a view over the two tables and set up a trigger on the view. Is that correct?
I also need to know only the affected columns. How do I get only the affected columns?
Please suggest a solution.
Thanks!!
There are lots of ways to let the system handle all that grunt work for you - depending on the SQL Server version you're using:
How to: Use SQL Server Change Tracking (as of SQL Server 2008)
Introduction to SQL Server change tracking
Understanding SQL Server Audit (as of SQL Server 2008 R2)
Articles for SQL Server Auditing (various versions)
If you really must handle all the work yourself, you need to get familiarized with triggers - read up on them in Data Points: Exploring SQL Server Triggers.
Inside your trigger code, you have two "pseudo-tables":
Inserted is the table holding the values being inserted (in an INSERT trigger) or the new values (in an UPDATE trigger)
Deleted is the table holding the values being deleted (in a DELETE trigger) or the old values (in an UPDATE trigger)
With those two pseudo-tables, you can get access to all data you might need.
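For the "which columns changed" part of the question above: note that the only triggers allowed on a view are INSTEAD OF triggers, so auditing is normally done with AFTER triggers on the base tables themselves. Inside an UPDATE trigger, the UPDATE() function tells you whether a column appeared in the SET list, and joining Inserted to Deleted gives old vs. new values per row. A sketch with hypothetical table and column names:

```sql
CREATE TRIGGER trg_Employee_Audit
ON dbo.Employee
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDATE(Salary) is true if Salary was mentioned in the SET list;
    -- the WHERE clause keeps only rows whose value actually changed.
    IF UPDATE(Salary)
        INSERT INTO dbo.EmployeeAudit (EmployeeId, OldSalary, NewSalary, ChangedAt)
        SELECT d.EmployeeId, d.Salary, i.Salary, SYSDATETIME()
        FROM   Inserted AS i
        JOIN   Deleted  AS d ON d.EmployeeId = i.EmployeeId
        WHERE  i.Salary <> d.Salary;
END
```

Repeat the same pattern (or use COLUMNS_UPDATED() for a bitmask over all columns) for each column you need to audit, on each of the two base tables.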