Adding a column to a table in SQLite

I've got a table in SQLite, and it already has many rows stored in it. I now realise I need another column in the table. Up to now I've just deleted the database and started again, because the data was only test data. But now the data in the database can't be deleted.
I know the query to add a column to the table; my question is, what is a good way to do this so that it works for both existing users and new users? (I have already updated the CREATE query I run when the table is not found, because it's a new user or an existing user has cleared the database.) It seems wrong to ship software with an ALTER query in it and check every time. Is there some way of telling SQLite to automatically add the column if it doesn't exist when I run the UPDATE query I now need?
If I discover I need more columns in the future, is having a bunch of ALTER statements on startup (or somewhere?) really the best way to do it?
(If relevant, this is for a Node.js app.)

I'd just add a table somewhere that marks what version of your database it is, and check that to determine whether an update is needed. Alternatively, if you already have a table that always holds exactly one record, add a new field 'DatabaseVersion' to it.
So, for example, if you check the version number and find it's a version 1 database when the newest version should be version 3, you know which updates to perform on it.
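A minimal sketch of that idea in SQLite (the table, column, and version numbers here are placeholders, not anything from the original post):

    -- Run once, when the database is first created
    CREATE TABLE IF NOT EXISTS schema_info (version INTEGER NOT NULL);
    INSERT INTO schema_info (version)
        SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM schema_info);

    -- On startup: read the version, then apply each migration in order
    SELECT version FROM schema_info;
    -- if version < 2:
    ALTER TABLE customers ADD COLUMN email TEXT;
    UPDATE schema_info SET version = 2;
    -- if version < 3: next migration, and so on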

You can use PRAGMA user_version to store the version number of the database and check if the database needs to be updated.
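For example, a hedged sketch of a startup migration using user_version (the ALTER statement stands in for whatever change you need):

    PRAGMA user_version;        -- returns 0 on a brand-new database

    -- if the reported version is below the app's current schema version:
    ALTER TABLE customers ADD COLUMN email TEXT;
    PRAGMA user_version = 1;    -- record that this migration has run

Since user_version is stored in the database file header, this approach needs no extra table.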

SQL Server's table shows all fields as #Deleted, but when converted to local, all information is there

My company has a really old Access 2003 .ADP front-end connected to an on-premises SQL Server. I was trying to update the front-end to MS Access 2016, which is what we're transitioning to, but when linking the tables I get all the fields in this specific table as #Deleted. I've looked around and tried changing some of the settings, but I don't know SQL Server well enough to know what I'm doing, hence asking for help.
When converting the table to local, all the info is correctly displayed, which makes this all the more puzzling. Also, skipping to the last record will reveal the info on that record, and sorting/filtering reveals some of the records, but most of the table stays "#Deleted"...
Since I know you're going to ask: yes, I need to edit the records. Although the snapshot method would work for people who only view the info, some of us need to edit it.
I'm hoping someone can shed some light on this,
Thanks in advance, Rafael.
There are 3 common reasons for this:
You have bit fields in SQL Server, but they are null. They should be assigned a default of 0.
The table in question does NOT have a PK (primary key).
Last but not least, you need (want) to add a timestamp column. Keep in mind that this is really what we call a "row version" column (so it is not a date/time column, but a timestamp column). Adding this column helps Access determine whether a record has been changed, and this is especially important for any table/form in Access that allows editing of "real" number data types (single, double). If Access does not find a timestamp column, it reverts to a column-by-column comparison to detect changes, and because of how computers handle "real" numbers (with rounding), such comparisons often fail.
So, check for the above 3 issues. You likely should re-run the linked table manager after making any changes.
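As a rough sketch, the SQL Server side of those three fixes might look like this (table and column names are made up for illustration):

    -- 1) Give the bit column a default and clear out existing NULLs
    ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_MyBit DEFAULT 0 FOR MyBitField;
    UPDATE dbo.MyTable SET MyBitField = 0 WHERE MyBitField IS NULL;

    -- 2) Make sure the table has a primary key (ID must be NOT NULL)
    ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (ID);

    -- 3) Add a rowversion ("timestamp") column for Access change detection
    ALTER TABLE dbo.MyTable ADD RowVer rowversion;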

Get column creation date?

I am using Oracle 11. I need to find out when a specific column was created. I know we can find the last DDL change date, but I first created the column and then, some days later, created an index on one of the columns of the same table. So now I need to find when that specific column was created.
Is there a way?
This depends on your audit settings: if the object was being audited, you may find it in the audit trail. I'd suggest reading
http://docs.oracle.com/cd/B28359_01/server.111/b28337/tdpsg_auditing.htm
Or you can use LogMiner to check the redo logs if your DB was running in ARCHIVELOG mode. But I have never used this, so I'm not sure about all the requirements there.
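For instance, if standard auditing was already enabled for the table (e.g. via AUDIT ALTER TABLE), a query along these lines against the 11g audit trail might surface the ALTER; this is an untested sketch with placeholder names, and note that SQL_TEXT is only populated when AUDIT_TRAIL is set to DB,EXTENDED:

    SELECT timestamp, action_name, sql_text
    FROM   dba_audit_trail
    WHERE  owner = 'MY_SCHEMA'
    AND    obj_name = 'MY_TABLE'
    AND    action_name LIKE 'ALTER%'
    ORDER  BY timestamp;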

SSIS no-match lookup? SQL server integration services - prevent duplicate rows

In SSIS 2012, let's presume I simply copy customer data from one DB source to a DB destination (both are different database instances; one cannot "see" the other).
How do I prevent adding customer data I have already added before? In other words, when I rerun the task, it should not add the customer twice or more (only the ones that previously failed). We have a reference available in the destination customer table, e.g. 'SourceCustomerID', but it is non-unique!
So we cannot rely on some unique index in the destination table(s), and even if we could, I wouldn't want to go this way (it would cause failures)...
Added based on the questions below: there ARE columns that uniquely identify data in the target table, and we need these for this, but they are neither implemented as unique indexes, nor do I want to let the job (or rows) fail over them. I want to prevent adding these rows in a controlled way.
I tried the Lookup component, playing with "Lookup No Match Output", etc. ... no luck yet.
Any ideas how to accomplish this using SSIS principles?
Best regards
Bart.
Use the SCD component
https://msdn.microsoft.com/en-us/library/ms141715.aspx
You map the business key, which is used to check for an existing record, and then you can insert/update. You can alter it to insert only.
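If the SCD wizard turns out to be too heavy, the same insert-only behaviour can also be sketched in plain T-SQL against a staging table loaded by the data flow (names here are illustrative, not from the question):

    -- Assumes the data flow lands the source rows in a staging table first
    INSERT INTO dbo.Customer (SourceCustomerID, Name, Email)
    SELECT s.SourceCustomerID, s.Name, s.Email
    FROM   dbo.Customer_Staging AS s
    WHERE  NOT EXISTS (
        SELECT 1
        FROM   dbo.Customer AS c
        WHERE  c.SourceCustomerID = s.SourceCustomerID
              -- extend with whatever columns truly identify a customer
    );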

VS SchemaCompare: Making Table Updates

Does anyone know how the SchemaCompare in Visual Studio (using 2010 currently) determines how to handle [SQL Server 2008R2] database table updates (column data type, optionality, etc)?
The options are to:
Use separate ALTER TABLE statements
Create a new table, copy the old data into the new table, and rename the old table so that the new one can be renamed to assume the proper name
I'm asking because we have a situation involving a TIMESTAMP column (for optimistic locking). If SchemaCompare uses the new table approach, the TIMESTAMP column values will change & cause problems for anyone with the old TIMESTAMP values.
I believe Schema Compare employs the same CREATE-COPY-DROP-RENAME (CCDR) strategy as VSTSDB, described here: link
You should be able to confirm this by running a compare and scripting out the deployment, no?
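For what it's worth, a CCDR deployment follows roughly this shape (a simplified, hand-written sketch, not actual Schema Compare output). The comment in step 2 is exactly the TIMESTAMP concern: the copied rows receive brand-new rowversion values.

    -- 1) Create the replacement table with the changed definition
    CREATE TABLE dbo.tmp_MyTable (
        ID     int NOT NULL PRIMARY KEY,
        Col1   nvarchar(100) NULL,   -- e.g. widened from nvarchar(50)
        RowVer rowversion
    );

    -- 2) Copy the data (the rowversion column cannot be copied)
    INSERT INTO dbo.tmp_MyTable (ID, Col1)
        SELECT ID, Col1 FROM dbo.MyTable;

    -- 3) Swap the tables
    DROP TABLE dbo.MyTable;
    EXEC sp_rename 'dbo.tmp_MyTable', 'MyTable';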

Dynamic SQL statement return value using the current target connection

I'm currently creating my first real-life project in Pervasive. The task is to map a certain XML structure containing orders (as in shops and products) to 3 tables I created myself. These tables live inside an MS SQL Server instance.
All of the tables have a unique key called "id", an automatically incremented column. I've dropped this column from all mappings so that Pervasive will not try to fill it itself.
For certain calculations, for a split key in one of the tables, and for references to the created records in other tables, I will need the id that the database has just created. I have googled the answer: I can use "SELECT @@IDENTITY;" as a statement, and this returns the id that was most recently created for the current connection. This means that in Pervasive I will have to execute this statement using the already existing target connection object.
But how do I do that? I am quite sure that I will need a DJImport or DJExport object, but how do I get one associated with the connection that Pervasive inserts the records through?
Or is there any other way to handle this auto increment when I need to reference the id in other tables?
Not sure how things work in Pervasive, but you may run into issues with @@IDENTITY. SCOPE_IDENTITY() would probably be safer, but may still not work in Pervasive.
Hopefully your tables have a natural key in addition to the generated id, in which case you can select your id based on the natural key. This will avoid any issues you may have with disparate sessions and scope.
If anyone looks this post up and wonders about the answer, it's "You can't". Pervasive does not allow access to its very own connection object, the one it uses to query the database. Without access to it, you cannot be guaranteed to fetch the right id. The solution for us was this: we used a stored procedure, called in the Before-Transformation event, that creates the header record and returns the id and an optional error message as a table. We execute it, it returns the id, and we save that id and use it throughout our mapping.
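A hedged sketch of what such a stored procedure could look like on the SQL Server side (names are invented; the real one was specific to our order tables):

    CREATE PROCEDURE dbo.CreateOrderHeader
        @ShopID    int,
        @OrderDate datetime
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO dbo.OrderHeader (ShopID, OrderDate)
        VALUES (@ShopID, @OrderDate);

        -- Return the new id (plus an error slot) as a one-row table;
        -- SCOPE_IDENTITY() is reliable here because the INSERT and the
        -- SELECT run in the same scope, on the same connection
        SELECT CAST(SCOPE_IDENTITY() AS int) AS id,
               CAST(NULL AS nvarchar(200))   AS error_message;
    END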
