We currently have about 72 custom views and a few stored procedures in a Firebird 2.5 database.
For every single update our software provider wants to initiate (about every 2 weeks), we need to:
Drop the views one by one manually
Extract the 'CREATE' query into a notepad file
Copy and paste each query and execute it in order to recreate the original view.
I'm hoping there is some faster way to do this; I've been searching for hours now and can't really find anything for Firebird.
Any help pointing in the right direction is welcome!
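The closest I have found so far is that Firebird keeps the defining SELECT of each view in RDB$RELATIONS.RDB$VIEW_SOURCE, so something like the query below (a sketch, not yet tested against our database) should list every user view together with its source text, ready to be wrapped in CREATE VIEW statements; isql's -extract switch can apparently dump the full DDL as well.
SELECT TRIM(r.RDB$RELATION_NAME) AS view_name,
       r.RDB$VIEW_SOURCE         AS view_source
FROM RDB$RELATIONS r
WHERE r.RDB$VIEW_BLR IS NOT NULL                 -- views only, not tables
  AND COALESCE(r.RDB$SYSTEM_FLAG, 0) = 0         -- skip system objects
ORDER BY r.RDB$RELATION_NAME;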
I am using SSMS v18.10, but this really applies to any prior version as well.
If you import a table using 'Import Flat File', the table is not visible for creating views unless you restart SSMS. You can 'Refresh' the database and/or the tables, but this does nothing to make the table available for view creation in Design mode. You can save the half-edited view and then go to Design and 'Add Table', but still not see the newly added table. I find the only way to see the new table is to restart SSMS. If you are halfway through some work and have 10, 20, 50 or 100 views and tables open, it is grossly counterproductive to have to shut everything down and restart fresh. Does anyone know a way around this? Thank you for your time.
I am wondering if I can modify a generated script to copy data from one database to another. I used the "Generate Scripts" tool; can I then take the latest run each time and create a stored procedure that takes the latest tables and inserts their data into the new database?
Source tables have a suffix of _dateinfo, so a table called ABC_05162016 would be ABC_06162016 on the next run. Is there a way to take the original generated script and update that last part once, instead of re-selecting all the tables every time this needs to be done?
Yes, I have looked around for an answer to this but have not come across anything like it; everything deals with importing/exporting data or the Generate Scripts part. Thanks for the help.
If there is a better way than those three, I would appreciate knowing that as well.
Using SQL Server 2008.
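To show the kind of thing I mean, here is a rough sketch of building the dated table name dynamically; the database, table, and column names are placeholders, and the suffix is assumed to be MMDDYYYY as in the example above:
DECLARE @suffix varchar(8);
DECLARE @sql    nvarchar(max);

-- e.g. '06162016' for 16 June 2016 (style 101 is MM/DD/YYYY)
SET @suffix = REPLACE(CONVERT(varchar(10), GETDATE(), 101), '/', '');

-- Copy the rows from the latest dated table into the target database
SET @sql = N'INSERT INTO TargetDb.dbo.ABC (Col1, Col2)
             SELECT Col1, Col2
             FROM SourceDb.dbo.' + QUOTENAME('ABC_' + @suffix) + N';';

EXEC sp_executesql @sql;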
Our firm does not have a dedicated DBA employed but does have select developers performing DBA functions. We update our database often during a development cycle and have a release script with the various updates. We keep our db schema and objects in Visual Studio in a Database Project.
However, we often encounter two stumbling blocks that cause time-intensive manual intervention:
Developers cannot always sync from the Database Project to their local database, because if we have added a NOT NULL field to an existing table that contains data, the Deploy process from VS to the db isn't smart enough to automagically insert "test" data just to get the field into the table (unless this is a setting someplace?). We would of course follow this up, if possible, with a script to populate the field with real data, but we can't because the deployment fails.
Sometimes a developer will restore a backup from some random past date. There is no way of knowing exactly which db updates were applied to that database, so they don't know which scripts to start applying. What we do in this case is check each script, chronologically, to see if the changes from that script have been applied to the database. If so, move on to the next script. Repeat.
One method we have discussed is potentially creating a "Database Update Level" table in the database with 1 field, 1 row. It would maintain the level that the database has been updated through. For example, when the first script is run, update the level to 2. In each db script, we would wrap the statements in a check such as
IF (SELECT Database_Update_Level FROM Database_Update_Level) < 2
BEGIN
    -- do some things here
    UPDATE Database_Update_Level SET Database_Update_Level = 2;
END
The db scripts can then be run on any database, because the individual statements won't execute unless the database is still below the required level.
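For completeness, a minimal sketch of the single-row table this idea assumes (names as in the pseudo-code above):
CREATE TABLE Database_Update_Level (Database_Update_Level int NOT NULL);
INSERT INTO Database_Update_Level (Database_Update_Level) VALUES (1);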
This feels like we're missing something because this must be a common problem that every development shop that allows developers to develop locally encounters.
Any insights would be greatly appreciated.
Thanks.
About the restore problem, I don't see many solutions; you might try to prevent full restores and run scripts to populate the tables instead. As for versioning structures, do you use SSDT (SQL Server Data Tools) in VS? You can generate DACPACs and produce diff scripts.
But are you saying that you also alter structures directly in the database? No way to avoid that? If not, you could for example use DDL triggers (http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/) to at least get notified that something changed.
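To illustrate the DDL trigger idea, here is a hedged sketch; the log table name and the captured columns are my own assumptions rather than anything taken from the linked tip:
-- Log table for schema changes (name is an assumption).
CREATE TABLE dbo.SchemaChangeLog (
    EventTime   datetime       NOT NULL DEFAULT GETDATE(),
    LoginName   nvarchar(128)  NOT NULL,
    EventType   nvarchar(100)  NOT NULL,
    ObjectName  nvarchar(256)  NULL,
    TSQLCommand nvarchar(max)  NULL
);
GO

-- Fires on CREATE/ALTER/DROP inside this database and records who changed what.
CREATE TRIGGER trgTrackDDL ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml;
    SET @e = EVENTDATA();
    INSERT INTO dbo.SchemaChangeLog (LoginName, EventType, ObjectName, TSQLCommand)
    VALUES (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]',               'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',               'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]',              'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
GO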
One easy way to solve the NOT NULL problem is to establish default constraints (which could just be an empty string, the max number value for the data type, the max date value, etc.). When the publish occurs, the new column will be populated with the default value.
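For example (table and column names made up), adding the new NOT NULL column together with a default constraint lets the publish populate the existing rows automatically:
ALTER TABLE dbo.Customer
    ADD Region varchar(20) NOT NULL
    CONSTRAINT DF_Customer_Region DEFAULT ('');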
For the second issue, I'd use post-deploy scripts in your SSDT project to keep the data in sync, using 'NOT EXISTS' checks to make incremental changes. That way, you can simply publish the database and allow the data updates to occur one after another.
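And a sketch of the kind of idempotent post-deploy statement I mean, with placeholder table and values:
IF NOT EXISTS (SELECT 1 FROM dbo.Region WHERE RegionCode = 'EU')
    INSERT INTO dbo.Region (RegionCode, RegionName)
    VALUES ('EU', 'Europe');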
I'm after a bit of advice on the best way to go about this in SQL Server 2008 R2 Express. I have a number of applications that are in separate databases on the same server. They are all "plugins" that use a central staff/structure list that will be in a separate database. The application is in the process of being migrated from JET.
What I'm looking for is the best way for all the "plugin" databases to see the central database and use its tables in standard queries, views, etc.
As I'm using Express, that rules out any replication solution, and so far the only option I can think of is to use triggers or a stored procedure to "push" out all the changes to the plugins. The information needs to be populated on a near real-time basis; however, the number of changes will be very small, maybe up to 100 a day, and the biggest table only has about 1000 rows at the moment (the staff names table).
Hopefully that covers everything, but if anyone needs any more details then just ask.
Thanks
Apologies if I've misunderstood, but from your description it sounds like all these databases are hosted on the same instance of SQL Server - it's your mention of replication that makes me uncertain.
Assuming that's the case, you should be able to replace any copies of tables from the central database which are held in the "plugin" databases with views or synonyms which reference the central tables directly, since SQL Server allows you to make references between databases on the same server using three-part naming (database_name.schema_name.object_name).
For example, if each plugin db has a table StaffNames, you could replace this with a view by dropping the table, then creating a view:
drop table StaffNames
go
create view StaffNames
as
select * from <centraldbname>.<schema - probably dbo>.StaffNames
go
and your code should continue to work seamlessly, as long as permissions are set up.
Alternatively, you could replace all the references to the shared tables in the plugin databases with three-part name references to the central database, but the view method requires less work.
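If you'd rather not maintain views, the synonym variant of the same example looks like this (again assuming the shared table lives in the dbo schema of the central database):
drop table StaffNames
go
create synonym StaffNames for <centraldbname>.dbo.StaffNames
go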
Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another column. Then I write a formula in Excel to create the 'update' statement. Is there any way to simplify the updating task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by "non-programmers" (which means they don't understand SQL, so it may not be feasible to let them write queries). Is there any tool that can let them update the table directly without having DBAs do this task? Also, that tool needs to limit privileges so only certain tables can be modified, and it would be better if there were a way to roll back the change.
Create a DTS package that will import a CSV file, make the updates, and then archive the file. The user can drop the file in a specific folder designated for the task, or this can be done by an ops person. Schedule the DTS package to run every hour, day, etc.
In case your users insist on keeping Excel, you've got several different possibilities for getting the data transferred to SQL Server. My preferred one would be to use DTS/SSIS, as mentioned by buckbova.
However, another method is by using OPENROWSET(), which makes it possible to query your Excel file as if it was a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
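To give an idea of what that looks like, here is a sketch of an UPDATE driven directly by the spreadsheet through OPENROWSET; the file path, sheet name, column names, and the Jet provider string (which suits the older SQL Server versions you mention, provided 'Ad Hoc Distributed Queries' is enabled) are all assumptions on my part:
-- Match spreadsheet rows on the original value and apply the new value.
UPDATE t
SET    t.SomeValue = x.NewValue
FROM   dbo.MyTable AS t
JOIN   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;Database=C:\Updates\changes.xls;HDR=YES',
                  'SELECT * FROM [Sheet1$]') AS x
       ON x.OriginalValue = t.SomeValue;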
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data, accessible to the users who are allowed to do updates, and set up triggers on the view to perform the actual updates on the underlying data. Restrict changes to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
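A rough sketch of that setup, with made-up table and column names:
CREATE VIEW dbo.vStaffEdit
AS
SELECT StaffID, PhoneNumber, Department
FROM   dbo.Staff;
GO

-- INSTEAD OF trigger: an UPDATE against the view is translated into an
-- UPDATE of the underlying table, limited to these columns.
CREATE TRIGGER dbo.trg_vStaffEdit_Update ON dbo.vStaffEdit
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE s
    SET    s.PhoneNumber = i.PhoneNumber,
           s.Department  = i.Department
    FROM   dbo.Staff AS s
    JOIN   inserted    AS i ON i.StaffID = s.StaffID;
END;
GO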
You could create some accounts in SQL Server for these users and limit their access to only certain tables and columns, along with only select / update / insert privileges. Then you could create an Access database with linked tables to these.
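Something along these lines (names made up) would give such an account select rights plus update rights limited to specific columns on SQL Server 2005; SQL Server 2000 would use sp_addlogin / sp_grantdbaccess instead of CREATE LOGIN / CREATE USER:
-- Create the login at server level, then the user and grants in the target database.
CREATE LOGIN DataEditor WITH PASSWORD = 'S0me$trongPassword!';
GO
CREATE USER DataEditor FOR LOGIN DataEditor;
GRANT SELECT ON dbo.Staff TO DataEditor;
GRANT UPDATE (PhoneNumber, Department) ON dbo.Staff TO DataEditor;
GRANT INSERT ON dbo.Staff TO DataEditor;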