Due to an error in our build process, we had the following initial situation:
Connection string of the datasource
jdbc:jtds:sqlserver://db01.example.de/AppDB_Example_Prod
Entity files have the catalog "AppDB_Example"
A stored procedure that is called using a named query
{ CALL usp_performSearch(:searchQuery) }
As you can see, we have a mismatch between the connection string and the catalogs. Normally, they should be equal.
At runtime, we execute the stored procedure and retrieve the results from the database AppDB_Example_Prod, as this is the database we are connected to. After that, we load related entities via the entityManager from the database AppDB_Example, as this is the catalog specified in the entity's annotation. JPA does this itself; we have no influence on it.
Searching the internet, I've read that you should create multiple persistence units / data sources to work with multiple databases.
Does it work as it is supposed to, or did we hit a bug?
Could this be used without any problems to work with multiple databases via one connection string?
Does this only work with SQL Server (MSSQL), so that it will fail if we switch to another database in the future?
This feature isn't supported by JPA itself but depends on the database and the permissions of your connection (usually the DB user you connect with).
JPA doesn't care much about the schema. If you don't specify one, then JPA will not send schema information to the database. Usually, there is a default schema attached to the user (or one is specified via the JDBC connection settings). That way, the database knows where to look.
If you specify a schema, then JPA will include this information in the SQL it sends to the database. That means instead of TABLE.COLUMN, it will generate SCHEMA.TABLE.COLUMN. Whether this works depends only on the database (and maybe the JDBC driver) but not on JPA.
All SQL databases should allow you to look at schemas other than the default one if your DB user has the necessary permissions.
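For illustration: in SQL Server terms, the JPA "catalog" is the database name, so the qualified SQL the provider emits for your entities should look roughly like the following sketch (SomeTable and its columns are hypothetical). Run while connected to AppDB_Example_Prod, the catalog prefix redirects the lookup to the other database:
-- connected to AppDB_Example_Prod; the prefix points at the other catalog
SELECT t.ID, t.Name
FROM AppDB_Example.dbo.SomeTable AS t
WHERE t.ID = 42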
Related
Does Snowflake have anything in the information schema (or elsewhere) where I can query the servername of the server I am actively connected to?
I am developing in a BI tool that connects to a Snowflake data warehouse. I am seeing some anomalies in the data. Although my connection properties are supposedly pointing me to one server & database, I am not convinced that is where the data is actually coming from. I'd like to query Snowflake in the BI tool.
I've already checked the INFORMATION_SCHEMA.DATABASES through the BI tool, and the database name is correct. I'd like to also verify the servername as well.
There is nothing in the information schema that tells you about the actual underlying resources you are connected to. From the documentation:
The data objects stored by Snowflake are not directly visible nor accessible by customers; they are only accessible through SQL query operations run using Snowflake.
But I doubt that is what is causing the anomaly. In Snowflake, the data comes from cloud storage. There is a clear separation of compute and storage. A virtual warehouse in Snowflake is essentially a query execution engine. So even if you connect to a different server, that should not matter in terms of what the query returns.
See also: https://docs.snowflake.net/manuals/user-guide/intro-key-concepts.html#database-storage.
I think this might be what you're looking for:
CURRENT_ACCOUNT()
CURRENT_REGION()
But you may have to do a bit of work to convert the region to the format expected in the URL (maybe create a mapping or a UDF). For example, AWS_US_EAST_1 would need to be converted to us-east-1.
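A minimal sketch of that conversion in Snowflake SQL, assuming the raw value follows a CLOUD_REGION_NAME pattern (the prefix list here is an assumption and may not cover every region; other clouds may need a real mapping table):
SELECT CURRENT_ACCOUNT() AS account_name,
       CURRENT_REGION()  AS raw_region,
       -- strip an assumed cloud prefix, lowercase, swap underscores for hyphens
       LOWER(REPLACE(REGEXP_REPLACE(CURRENT_REGION(), '^(AWS|AZURE|GCP)_', ''), '_', '-')) AS url_region
-- e.g. AWS_US_EAST_1 -> us-east-1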
I'm tired of searching for this, but I couldn't find anything.
I have three databases in SQL Server and although all stored procedures are in the Main database, they work with tables from the other databases.
My question is: if you have the query
select name
from SecondDatabase.dbo.SomeTable
where id = 56
and this query is stored in the main database, will it run in the main database and go all the way to the second database and return the data, or will it run in the second database so that you have the select result directly?
(hope you understand my question)
I think you are misunderstanding the difference between a Database and an Instance.
An instance is the software running the SQL service. Each instance can have multiple databases. For example, there is a master database and a tempdb database for each instance of SQL Server, these are system databases. You can create any number of user databases. All these databases will be handled by the same SQL Server instance (on the same machine).
A particular client session is connected first to an instance and then to a particular database; that's why you include the database you will connect to by default in connection strings (or via the login). When you write select name from SecondDatabase.dbo.SomeTable, you are telling the SQL service to retrieve data from SecondDatabase, even if your session is linked to any other database. The engine will then use your login credentials to find a matching user in the other database (since users go by database and logins by instance) to validate whether it has enough privileges to query that table, before searching for the data.
A completely different story would be trying to access data from another instance (machine), for which you would need a linked server, OPENROWSET, or the like.
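A hedged sketch of both options, with a hypothetical linked server named RemoteInstance (OPENROWSET additionally requires ad hoc distributed queries to be enabled on the instance):
-- via a linked server, using four-part naming
SELECT name
FROM [RemoteInstance].[SecondDatabase].[dbo].[SomeTable]
WHERE id = 56

-- or ad hoc via OPENROWSET
SELECT name
FROM OPENROWSET('SQLNCLI',
                'Server=RemoteInstance;Trusted_Connection=yes;',
                'SELECT name FROM SecondDatabase.dbo.SomeTable WHERE id = 56')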
use FirstDatabase
select name
from SecondDatabase.dbo.SomeTable
where id = 56
Question:
will it run in the main database and go all the way to the second
database and returns the data, or will it run in the second database
and you have the select result directly?
Your first assumption is correct:
This query will run in the first database; it will use the context and all settings (ANSI, query optimizer, and statistics related) of the first database, but will get data from a table in the second database.
Just a real-life example: if a database has to stay in an old compatibility mode, but new T-SQL features occasionally need to be used, a query can switch its context to tempdb (which is normally set to the latest compatibility level) and run queries referencing data from any other database where access is granted. Using those new features will then not raise an exception.
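A sketch of that trick with hypothetical names: suppose OldDb is pinned to compatibility level 120, where STRING_SPLIT (which requires level 130) raises an error:
USE tempdb   -- tempdb normally runs at the instance's latest compatibility level
GO
SELECT t.id, s.value
FROM OldDb.dbo.SomeTable AS t
CROSS APPLY STRING_SPLIT(t.tags, ',') AS s   -- works here; the data still comes from OldDb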
The (now edited) query above will always execute against SecondDatabase.dbo.SomeTable, even if the active database context is another database and even if the active user has a different default schema. This is because the object SomeTable is fully qualified with both the database name and the schema.
A test to illustrate that the following still returns the expected results (assuming the objects exist and the active user context has access to them):
USE [OtherDatabaseSchema]
GO
SELECT TOP 10 *
FROM [SecondDatabase].[dbo].[SomeTable]
I have an Access 2003 database that holds all of my business data. This Access database gets updated every few hours during the day.
We're currently writing a website that will need to use the data from the Access database. This website (for the time being) will be read-only, meaning there will only need to be a one-way transfer of data (Access -> SQL).
I imagine there's a way to perform this data migration from Access to SQL Server programmatically. Does anyone have any links to something I can read about?
If this practice sounds odd and you'd like to suggest another way to do this (or a situation where data can go both ways: Access -> SQL, SQL -> Access), that's perfectly fine.
The company is going to continue using Access 2003 for their business functionality. There's no way around that. But I'd like to build the (read-only) website on top of SQL Server.
The strategy you outlined can be very challenging. You could use INSERT queries to copy new Access rows to SQL Server, as described in another answer.
However, if you have changes to existing Access rows, and you also want those changes propagated to SQL Server, it won't be so simple. And it will be more complicated still if you want deleted Access rows deleted from SQL Server, too.
It seems more reasonable to me to use a different approach. Migrate the data to SQL Server once. Then replace the tables in your Access database with ODBC links to the SQL Server tables. Thereafter, changes to the data from within your Access application will not require a separate synchronization step ... they will already be in SQL Server. And you won't need to write any code to synchronize them.
If your concern is that the connections between the web server and SQL Server be read-only, just set them up that way. You can still independently allow read-write permissions for your Access application.
To do the initial data migration and set up the SQL Server database automatically, I would use the SQL Server Migration Assistant. The only thing I can think of that you should definitely change would be to turn off the Identity property on any columns that have it, as explained below (MS Access calls Identity AutoNumber). Once you have your tables loaded, you can set up a DSN-less connection to the database (and tables) you just created.
I haven't used the method just linked, but I believe it allows you to use SQL Server authentication to connect to the db. The benefit of using this method is that you can easily change which SQL Server instance and/or database you are connecting to for development and testing.
There might be a better, automated way, but you can create several insert queries doing left joins from the primary key of the Access table to the SQL Server table, with a WHERE clause that specifies the SQL Server primary key must be null. This is why you need to turn off the Identity property in the SQL Server tables, so that you can insert the new data.
Finally, put the name of each query in one function, then run the function periodically.
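A minimal sketch of one such append query, run inside Access, where dbo_Customers is an ODBC-linked SQL Server table and LocalCustomers is the native Access table (all names hypothetical):
INSERT INTO dbo_Customers (CustomerID, CustomerName)
SELECT a.CustomerID, a.CustomerName
FROM LocalCustomers AS a
LEFT JOIN dbo_Customers AS s ON a.CustomerID = s.CustomerID
WHERE s.CustomerID IS NULL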
I have used Microsoft's free SQL Server Migration Assistant (SSMA) to migrate Access to SQL Server. The tool is very simple to use. The only problem I have encountered with the tool was overloaded data types when migrating. What I mean by this is that a small string will get converted to NVARCHAR(MAX) in some instances. Otherwise, the tool is very handy and can be reused after setting up a 'profile'.
I have two databases, one on a remote server the other local. (SQL Server 2008)
The database on my local server has the entire structure setup but no data. I would like to copy the data from the remote server to my server and I am wondering the best method in which to do this.
The main issue I am experiencing is that the user I have for the remote database has limited permissions. I cannot read the stored procedures or user-defined functions, so when I use the Import/Export wizard I do not get the schema, etc. So a regular dump/restore is not working for me, as it restores the tables without the primary keys/foreign keys and the stored procedures.
I'd like to do this:
INSERT INTO localtable SELECT * FROM remotedb.table
I was having issues because of the IDENTITY fields, and I had to explicitly name all of the columns. Also, I am not sure if SQL Server Management Studio allows you to work with two different databases, remote and local, so I was looking for any advice.
I have also tried applications like SQL FTP and Backup, and it fails because it runs out of memory (I have 16 GB of memory on the machine and the DB is about 4 GB). I can also use the SQL Server Import/Export wizard, but then I don't get the schema information. I also tried SQL Compare from Red Gate, and it runs into issues with the permissions. Unfortunately I do not have the time to request and gain access to a new user, so I was hoping someone had a creative idea.
You can definitely use SQL Server Backups for this. It will not run out of memory. If it does please tell us the message (because likely you are misinterpreting it). This is the fastest possible and the most complete solution.
You can tell the export wizard to also script the schema. It is hidden under "advanced" somewhere (terrible UI). But the script will be extremely big and I know of no way to execute it.
You can drop all schema objects except PKs in the target database. Then you can use remote queries to copy all the data over. You will not get any problems with foreign keys and identity columns if you drop them beforehand. After you are done you can recreate all those objects. It is probably best to use a transaction for all of this, because that way you get consistent source data from a single point in time.
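A hedged sketch of the copy step, assuming a linked server [RemoteServer] pointing at the remote instance and hypothetical table/column names:
SET IDENTITY_INSERT dbo.LocalTable ON

INSERT INTO dbo.LocalTable (Id, Name, CreatedAt)   -- identity column must be listed explicitly
SELECT Id, Name, CreatedAt
FROM [RemoteServer].[RemoteDb].[dbo].[LocalTable]

SET IDENTITY_INSERT dbo.LocalTable OFF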
In SQL Server 2005, a snapshot of a database can be created that allows read-only access to a database, even when the database is in "recovery pending" mode. One use case for this capability is in creating a reporting database that references a copy of a production database, which is kept current through log-shipping.
In this scenario, how can I implement security on the "snapshot" database that is different from the "production" source database?
For example, in the production database, all access to data is through stored procedures, while in the snapshot database users are allowed to select from tables for reporting purposes. The problem that I see is that security for the snapshot database is inherited from the source database and cannot be changed, because snapshots are strictly read-only.
Are you able to manage permissions on this database? Would adding a separate user who only has read access to a database be sufficient for this type of scenario? This could be a read-only user on the main database, but is only effectively used on the snapshot db.
i.e., add a new user, readerMan5000, who is only given SELECT access to the database in question. Then require users to authenticate through that new credential.
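A sketch of that setup, done in the source database before the snapshot is taken, since the snapshot inherits the source database's security (the database name and password are hypothetical):
CREATE LOGIN readerMan5000 WITH PASSWORD = 'UseAStrongPasswordHere1!'
GO
USE ProductionDb
GO
CREATE USER readerMan5000 FOR LOGIN readerMan5000
EXEC sp_addrolemember 'db_datareader', 'readerMan5000'   -- read-only role membership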
Note to future commenters, you may want to read:
http://www.simple-talk.com/sql/database-administration/sql-server-2005-snapshots/
or
http://msdn.microsoft.com/en-us/library/ms187054(SQL.90).aspx
before you open your big mouth like me. :)
You can't change permissions after you take the snapshot, but here's one workaround: instead of having them access the tables directly, require them to use views. If the views are used only for reporting, then you can set tight security on them in the original database and then have the users hit those views in the snapshot. You'll need to restrict access on the underlying tables, though, if you want it to be effective.
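A minimal sketch in the source database, before the snapshot is taken (view, table, and role names are hypothetical):
CREATE VIEW dbo.vOrderSummary AS
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
GO
DENY SELECT ON dbo.Orders TO ReportingUsers           -- block direct table access
GRANT SELECT ON dbo.vOrderSummary TO ReportingUsers   -- allow only the view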