I have a database with a mixed-case name, e.g. testDATABASE.
I run (using ODBC) the query use database ""testDATABASE";", and then I run the query use schema "PUBLIC";
the second query fails with the error:
ERROR: SQL compilation error:
Object does not exist, or operation cannot be performed.
Error Code: 2043
Query = use schema "PUBLIC"
When I run it not via ODBC but in the notebook, it works fine.
The same queries work fine with a database whose name does not contain mixed case.
If I run use schema "testDATABASE"."PUBLIC", it runs OK via both ODBC and the notebook.
Is there a known issue about this? How can I run it as two queries via ODBC and make it work?
Thanks.
In your question it looks like your use database command had doubled double quotes,
but your use schema command didn't; perhaps that is the issue.
Overall suggestions:
When you make object names MiXeD-CaSe it simply makes using the objects more difficult, so I'd recommend avoiding mixed case if you can. You may not be able to avoid it, and that's OK; it's just a suggestion.
If you can't avoid it, the only time I'd use double quotes is when the object name
(in this case, the database name) has mixed case.
In your case, you should be able to run (you may have to double-double quote it in ODBC):
use database "testDATABASE";
and then this - note that no double quotes are needed because it's not mixed case:
use schema PUBLIC;
This document illustrates that you don't need to prefix the schema with the database:
https://docs.snowflake.com/en/sql-reference/sql/use-schema.html
Something else I recommend to folks getting started: for each user, I like to set all the default context items (role, warehouse, namespace):
ALTER USER rich SET DEFAULT_ROLE = 'RICH_ROLE';
ALTER USER rich SET DEFAULT_WAREHOUSE = 'RICH_WH' ;
ALTER USER rich SET DEFAULT_NAMESPACE = 'RICH_DB.TEST_SCHEMA';
When creating a UserDefinedType in C# code for the sake of SQLCLR integration, it is required that you decorate a class or struct with a SqlUserDefinedType attribute, such as here:
[SqlUserDefinedType(
    Name = "g.Options",
    // ...
)]
public struct Options : INullable {
    // ...
}
Notice that in the "Name" parameter, I attempt to set a schema in addition to the object name. But when I generate the script in the publish stage of a Visual Studio Database Project, I get:
CREATE TYPE [dbo].[g.Options]
There is no "schema" parameter for SqlUserDefinedType.
I do believe I can write the T-SQL script to create the type from the assembly explicitly, but I would like to avoid that, as I plan on putting most of my types in different schemas and wouldn't be happy to have to register each one via explicit T-SQL.
EDIT:
As Solomon Rutzky points out, you can set the Default Schema in the project properties. It is no substitute for something akin to a 'schema' parameter on SqlUserDefinedType, particularly if you want to work with multiple schemas, but it certainly gets the job done for many people's needs.
A post-deployment script will technically get the job done, but unfortunately, the comparison engine doesn't know about the post-deployment logic and so will perpetually register the schema difference as something that needs to be changed. So all your affected objects will be dropped and re-created on every publish regardless of whether you changed them or not.
The schema name is specified in a single location per project, not per object.
You can set it in Visual Studio via:
"Project" (menu) -> "{project_name} Properties..." (menu option) -> "Project Settings" (tab)
On the right side, in the "General" section, there is a text field for "Default schema:"
OR:
You can manually edit your {project_name}.sqlproj file, and in one of the top <PropertyGroup> elements (one that does not have a "Condition" attribute; the first such element is typically used), create (or update if it already exists) the following element:
<DefaultSchema>dbo</DefaultSchema>
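For context, a minimal sketch of where that element might sit in the .sqlproj file; the surrounding values shown here are illustrative placeholders, not taken from any particular project:
<PropertyGroup>
  <Name>MyDatabaseProject</Name>
  <DefaultSchema>dbo</DefaultSchema>
</PropertyGroup>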
HOWEVER, if you want to set one object (such as a UDT) to a different schema name than the rest of the objects are using, that has to be done manually in a post-deployment SQL script. You can add a SQL script to your project, then in the Solution Explorer, select the SQL script, go to its Properties, and for "Build Action", select "PostDeploy". In that post-deploy script, issue an ALTER SCHEMA statement:
ALTER SCHEMA [g] TRANSFER TYPE::dbo.Options;
I'm trying to understand how to configure my Hibernate to work properly with my MSSQL DB and its schemas.
The problem is that during validation of tables, it logs (for every table):
org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl
- HHH000262: Table not found SHARED_CONFIGURATION
I debugged Hibernate to find out what causes this and found that it calls something like:
EXECUTE [mydb]..sp_columns N'SHARED_CONFIGURATION',N'',N'mydb'
Notice that the 2nd parameter is the schema name, and an empty string is passed there. When I tried to run this query against the DB, it returned an empty result set. But when I passed 'dbo' as the 2nd parameter, the result set was not empty (meaning that Hibernate should call it that way instead).
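For illustration, these are the two variants side by side; the database and table names are simply the ones from the log output above:
-- as issued by Hibernate, with an empty schema parameter: returned no rows for me
EXECUTE [mydb]..sp_columns N'SHARED_CONFIGURATION', N'', N'mydb'
-- the same call with the schema spelled out: returns the column metadata
EXECUTE [mydb]..sp_columns N'SHARED_CONFIGURATION', N'dbo', N'mydb'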
OK, so it seemed that I needed to define the schema. But both setting hibernate.default_schema and setting schema in the @Table annotation on my entities threw an exception:
Schema-validation: missing table [SHARED_CONFIGURATION]
So now I'm wondering what the real problem is. I also wanted to set the default schema in my DB but was not allowed to (Cannot alter the user 'sa', because it does not exist or you do not have permission.), even when executing as user 'sa' itself:
ALTER USER sa WITH DEFAULT_SCHEMA = dbo;
Note that this happens with any driver (jTDS, the official MS driver, ...).
Can someone explain what is happening here and how to "correctly" get rid of that warning message in the log saying the table does not exist even though it exists (the application is able to run properly against the database)?
I had the same problem and solved it by setting the property hibernate.hbm2ddl.jdbc_metadata_extraction_strategy to individually.
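As a minimal sketch, assuming Hibernate is configured through hibernate.cfg.xml (adjust this to wherever your Hibernate properties live, e.g. persistence.xml or Spring's JPA properties):
<property name="hibernate.hbm2ddl.jdbc_metadata_extraction_strategy">individually</property>
The individually strategy makes Hibernate query the JDBC metadata for each mapped table separately during validation instead of doing one grouped extraction over all tables.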
I added a field to a Progress database with
ADD FIELD fieldName on TABLEName...
and now I want to change/modify this field (PRECISION or FORMAT or something else...)
What syntax is correct? I tried:
UPDATE FIELD
MODIFY FIELD
ALTER FIELD
I also tried SQL notation: ALTER TABLE
but nothing works.
Could you please help me with the syntax to modify a field?
If you are using the 4GL engine (you are using _progres or prowin32 to start a session) then you want to use the "data dictionary" tool to create DDL. You run "dict.p" to access that tool. i.e.: _progres dbName -p dict.p
This will allow you to create tables, define fields and indexes etc. If you want to export the definitions you use the "admin" sub-menu to dump a ".df" file. You can manually edit the output but you need to know what you are doing. It is mostly obvious but it is not documented or supported.
Do NOT imagine that using SQL from within a 4GL session will work. It will not. The 4GL engine internally supports a very limited subset of SQL-89. It is mostly there as a marketing ploy. There is nothing but pain and agony down that road. Don't go there. If you are using _progres or prowin32, you are using the 4GL engine.
If you are using SQL92 externally (via sqlexp or some other 3rd party SQL tool that uses an ODBC or JDBC connection) then normal SQL stuff should work but you might want to spend some quality time with the documentation to understand the areas where OpenEdge differs from Oracle or Microsoft or whatever sql dialect you are used to.
Tom, thanks for your answer.
I use OpenEdge Release 10.1A02 on Linux.
I can make a dump .df file and I can also add a new table from a file (a similar .df).
But why can't I modify any added fields? Of course I can use the "p" editor and do it manually from the menu Tools/Data Editor/Schema and add a new table, but it's risky to tell database administrators to do it manually on each environment (especially on production).
If this syntax exists:
ADD FIELD fieldName on TABLEName...
why is there no
MODIFY FIELD fieldName on TABLEName... ?
Bartek.
Just in case - here are some working examples of .df files in OE 11.3 (they may be valid in other versions too):
Rename column:
RENAME FIELD "OldName" OF "TableName" TO "NewName"
Other properties:
UPDATE FIELD "FieldName" OF "TableName"
FORMAT "Yes/No"
LABEL "Label"
VALMSG "Validation message..."
Of course the database must be shut down first (apply those changes in single-user mode).
Edit: I'm aware that SELECT * is bad practice, but it's used here just to focus the example SQL on the table statement rather than the rest of the query. Mentally exchange it for some column names if you prefer.
Given a database server MyServer (which we are presently connected to in SSMS), with several databases MyDb1, MyDb2, MyDb3, etc. and default schema dbo, are any of the following equivalent queries (they will all return exactly the same result set) more "optimal" than the others?
SELECT * FROM MyServer.MyDb1.dbo.MyTable
I was told that this method (explicitly providing the full database name including server name) treats MyServer as a linked server and causes the query to run slower. Is this true?
SELECT * FROM MyDb1.dbo.MyTable
The server name isn't required as we're already connected to it, but would this run 'faster' than the above?
USE MyDb1
GO
SELECT * FROM dbo.MyTable
State the database we're using initially. I can't imagine that this is any better than the previous for a single query, but would it be more optimal for subsequent queries on the same database (i.e., if we had more SELECT statements in the same format below this)?
USE MyDb1
GO
SELECT * FROM MyTable
As above, but omitting the default schema. I don't think this makes any difference. Does it?
SQL Server will always look for the objects you specify within the current "context" if you do not specify a fully qualified name.
Is one faster than the other? Sure, in the same way that a file named "This is a really long name for a file but as long as it is under 254 it is ok.txt" takes up more hard-drive (toc) space than "x.txt". Will you ever notice it? No!
As for the "USE" keyword, it just sets the database context for you so that you don't have to fully qualify object names. USE is a T-SQL statement, but you cannot use it within a stored procedure; the "GO" keyword it is often paired with is not T-SQL at all - GO just tells SSMS (or sqlcmd) where a batch ends, while USE changes the context.
I am trying to implement this solution:
NHibernate-20-SQLite-and-In-Memory-Databases
The only problem is that we have hbms like this:
<class name="aTable" table="[dbo].[aTable]" mutable="true" lazy="false">
with [dbo] in the table name, because we are working with MSSQL, and this does not work with SQLite.
I found this posting on the rhino-tools-dev group where they talk about just removing the schema from the mapping, but on NH2 there doesn't seem to be a classMapping.Schema.
There is a classMapping.Table.Schema, but it seems to be read-only. For example, this doesn't work:
foreach (PersistentClass cp in configuration.ClassMappings) {
    // Does not work - throws a
    // System.IndexOutOfRangeException: Index was outside the bounds of the array.
    cp.Table.Schema = "";
}
Is there a way to tell SQLite to ignore the [dbo] (I tried attach database :memory: as dbo, but this didn't seem to help)?
Alternatively, can I programmatically remove it from the classmappings (unfortunately changing the hbms is not possible right now)?
We had too many problems with SQLite which eventually pushed us to switch to SQL Express.
Problems I remember:
SQLite, when used in-memory, discards the database when the Session is closed.
SQLite does not support a bunch of SQL constructs, from basic ones such as ISNULL to more advanced ones like common table expressions and other features added in SQL Server 2005 and 2008. This becomes important when you start writing complex named queries.
SQLite's datetime has a bigger range of possible values than SQL Server's.
The API NHibernate uses for SQLite behaves differently than ADO.NET for MS SQL Server when used in the scope of a transaction. One example is the hbm-to-ddl tool, whose Execute method does not work inside a transaction with SQL Server but works fine with SQLite.
To summarize, SQLite-based unit-testing is very far from being conclusively representative of the issues you'll encounter when using MS SQL Server in PROD and therefore undermines the credibility of unit-testing overall.
We are using SQLite to run unit tests with NH 2.0.1. Actually, I didn't run into this problem; I just didn't specify dbo. I think it is the default on SQL Server.
By the way, there is a default_schema parameter in the configuration file. This is actually the database name, but you can try putting dbo there, only for the SQL Server configuration of course.
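For example, a minimal sketch of what that might look like in the NHibernate configuration for the SQL Server setup (whether the value should be dbo or something like MyDatabase.dbo is an assumption that depends on your environment):
<property name="default_schema">dbo</property>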
After looking through the source of NH and some experimenting, I think I found a simple workaround:
foreach (PersistentClass cp in configuration.ClassMappings)
{
    // Input: [dbo].[TableName]   Output: TableName
    // (Regex comes from System.Text.RegularExpressions)
    cp.Table.Name = Regex.Replace(cp.Table.Name, @"^\[.*\]\.\[", "");
    cp.Table.Name = Regex.Replace(cp.Table.Name, @"\]$", "");
    // just to be sure
    cp.Table.Schema = null;
}
Note that I can set Table.Schema to null, while an empty string threw an exception...
Thanks for the answers!