Progress - syntax to modify a FIELD in the database

I added a field to a Progress database with
ADD FIELD fieldName on TABLEName...
and now I want to change/modify this field (PRECISION or FORMAT or something else...).
What is the correct syntax? I tried:
UPDATE FIELD
MODIFY FIELD
ALTER FIELD
I also tried SQL notation (ALTER TABLE),
but nothing works.
Could you please help me with the syntax to modify a field?

If you are using the 4GL engine (you are using _progres or prowin32 to start a session) then you want to use the "data dictionary" tool to create DDL. You run "dict.p" to access that tool. i.e.: _progres dbName -p dict.p
This will allow you to create tables, define fields and indexes etc. If you want to export the definitions you use the "admin" sub-menu to dump a ".df" file. You can manually edit the output but you need to know what you are doing. It is mostly obvious but it is not documented or supported.
Do NOT imagine that using SQL from within a 4GL session will work. It will not. The 4GL engine internally supports a very limited subset of SQL-89. It is mostly there as a marketing ploy. There is nothing but pain and agony down that road. Don't go there. If you are using _progres or prowin32 you are using the 4GL engine.
If you are using SQL-92 externally (via sqlexp or some other 3rd-party SQL tool that uses an ODBC or JDBC connection) then normal SQL stuff should work, but you might want to spend some quality time with the documentation to understand the areas where OpenEdge differs from Oracle or Microsoft or whatever SQL dialect you are used to.
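For example, something along these lines should work over a SQL-92 connection. This is only a minimal sketch: the table and column names are made up, the default PUB schema is assumed, and the exact set of attributes you can alter this way is worth checking in the OpenEdge SQL reference:
-- run via sqlexp or an ODBC/JDBC tool, not inside a 4GL session
ALTER TABLE PUB.Customer ADD COLUMN phone2 VARCHAR(20);
ALTER TABLE PUB.Customer ALTER COLUMN phone2 SET DEFAULT 'unknown';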

Tom, thanks for your answer.
I use OpenEdge Release 10.1A02 on Linux.
I can dump a .df file and I can also add a new table from a file (a similar .df).
But why can't I modify any added fields? Of course I can use the "p" editor and do it manually from the menu Tools/Data Editor/Schema and add a new table, but it's risky if I tell database administrators to do it manually on each environment (especially on production).
If this syntax exists:
ADD FIELD fieldName on TABLEName...
why is there no
MODIFY FIELD fieldName on TABLEName... ?
Bartek.

Just in case - here are some working examples of .df files in OE 11.3 (they may be valid in other versions too):
Rename column:
RENAME FIELD "OldName" OF "TableName" TO "NewName"
Other properties:
UPDATE FIELD "FieldName" OF "TableName"
FORMAT "Yes/No"
LABEL "Label"
VALMSG "Validation message..."
Of course the database must be shut down first (apply those changes in single-user mode).

Related

use database with mixed case is not working via ODBC

I have a database with a mixed-case name, e.g. testDATABASE.
I run (using ODBC) the query use database ""testDATABASE";", then I run the query use schema "PUBLIC",
and the second query fails with the error:
ERROR: SQL compilation error:
Object does not exist, or operation cannot be performed.
Error Code: 2043
Query = use schema "PUBLIC"
When I run it in the notebook instead of via ODBC, it works fine.
The same queries work fine with a database whose name does not contain mixed case.
If I run use schema "testDATABASE"."PUBLIC" it runs OK via both ODBC and the notebook.
Is there a known issue about this? How can I run it as two queries via ODBC and make it work?
Thanks.
In your question it looks like your use database command had double double quotes,
but your schema didn't, perhaps that might be the issue.
Overall suggestions:
When you make object names MiXeD-CaSe it simply makes use of the objects more difficult, so I'd recommend not using mixed case if you can avoid it. You may not be able to avoid this, and that's OK; it's just a suggestion.
If you can't avoid it, the only time I'd use the double quotes is when the object name (in this case, the database name) has mixed case.
In your case, you should be able to run (you may have to double-double quote it in ODBC):
use database "testDATABASE";
and then this (note that no double quotes are needed because it's not mixed case):
use schema PUBLIC;
This document illustrates how you don't need to prefix the schema with the database:
https://docs.snowflake.com/en/sql-reference/sql/use-schema.html
Something else I recommend to folks getting started: for each user, I like to set all the default context items (role, warehouse, namespace):
ALTER USER rich SET DEFAULT_ROLE = 'RICH_ROLE';
ALTER USER rich SET DEFAULT_WAREHOUSE = 'RICH_WH' ;
ALTER USER rich SET DEFAULT_NAMESPACE = 'RICH_DB.TEST_SCHEMA';
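As a quick sanity check, you can confirm that both USE statements actually took effect before running anything else. This is just a sketch using Snowflake's session-context functions; nothing in it is specific to your database:
use database "testDATABASE";
use schema PUBLIC;
-- should return TESTDATABASE and PUBLIC for the current session
select current_database(), current_schema();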

RepoDb does not seem to work for SQL Server tables with dot in the name

I'm starting to use RepoDb, but my SQL Server 2016 database has tables with a dot in the middle like this: User.Data.
Moving from full .NET Entity Framework to RepoDb, I'm facing this issue. I'm using the fluent mapping and I wrote something like this:
FluentMapper
.Entity<UserData>()
.Table("[User.Data]")
.Primary(u => u.UserId)
I get the exception: MissingFieldsException and it says:
There are no database fields found for table '[User.Data]'. Make sure that the target table '[User.Data]' is present in the database and/or at least a single field is available.
Out of curiosity, I created a table UserData with the same attributes and primary key and it worked great (changing the fluent mapper to: .Table("[UserData]")).
Am I missing something?
Thanks for helping me
Support for this is only available in RepoDb.SqlServer v1.0.13 or above. You can use any of the approaches below. Make sure to specify the quotes if you are qualifying with the database and schema.
Via built-in MapAttribute.
[Map("[User.Data]")]
Via TableAttribute of the System.ComponentModel.DataAnnotations namespace.
[Table("[User.Data]")]
Via FluentMapper as you did.
FluentMapper
.Entity<UserData>()
.Table("[User.Data]");

Querying the Audit Log through Database [SQL Server]

I would like to take the Audit History provided by Enterprise Architect and create a SQL query to report through a BI tool, which will allow me and other users to search the history of an object, but I am having a little trouble understanding the audit table, t_snapshot.
From what I can tell, t_snapshot has a Style column that contains "INSERT", "UPDATE", and "DELETE", which would tell me what is happening, and the Notes column can tell me what object it is referencing, but so far I've only been able to get a partial picture. What I have not been able to deduce is when any event occurred or which user made the change.
If anyone has encountered this problem in the past, your input would be appreciated.
Well, I don't know whether you really want to touch that.
There's a column called BinContent which contains what you are looking for. It looks like
<LogItem><Row Number="0"><Column Name="object_id"><Old Value="1797"/><New Value="1797"/></Column><Column Name="name"><Old Value="CB"/><New Value="CBc"/></Column><Column Name="modifieddate"><Old Value="07.12.2018"/><New Value="11.12.2018"/></Column><appliesTo><Element Type="Action"/></appliesTo></Row><Details User="Thomas" DateTime="2018-12-11 08:22:59"/></LogItem>
So basically some XML describing the change including the plain text user name.
The BinContent column(s) are actually zips which contain a single file, str.dat, holding the above information.
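If you only need the what and the where (not the who or when, which live in BinContent), a starting point could be something like this. It is only a sketch: Style and Notes are the columns you already identified, and the filter values are hypothetical:
SELECT Style, Notes
FROM t_snapshot
WHERE Style IN ('INSERT', 'UPDATE', 'DELETE')
  AND Notes LIKE '%MyElementName%';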
Good luck.

Export tables from SQL Server to be imported to Oracle 10g

I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
Firstly I generated scripts to get the table structure, then modified them to conform to Oracle syntax standards (i.e. changed nvarchar to varchar2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names I will have trouble when it comes to importing this.
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting I'm referring to quoting the column values in the CSV. For example I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV
"101 High Street, Sometown, Some
county, PO5TC053"
After looking at some options with SQLDeveloper, and at manually trying to export/import, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
Go to the source schema on SQL Server
Right click > Export data
Select source as current schema
Select destination as "Oracle OLE provider"
Select properties, then add the service name into the first box, then username and password, be sure to click "remember password"
Enter query to get desired results to be migrated
Enter table name, then click the "Edit" button
Alter mappings, changing nvarchar to varchar2 and INTEGER to NUMBER (see the sketch after these steps)
Run
Repeat process for remaining tables, save as jobs if you need to do this again in the future
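For reference, the kind of type mapping described in the steps above looks roughly like this. It is a sketch with a made-up table; only the nvarchar-to-varchar2 and INTEGER-to-NUMBER mappings are taken from the steps, so check any other types against the Oracle documentation:
-- SQL Server source definition
CREATE TABLE Addresses (Id INTEGER, Street NVARCHAR(100));
-- Oracle equivalent after adjusting the mappings
CREATE TABLE Addresses (Id NUMBER, Street VARCHAR2(100));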
Use the SQLDeveloper migration tools
I think quoting column names in Oracle is something you should not use. It causes all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement, and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or the data in these columns. (Can SQL Server really have a comma in a column name? I'd be really surprised if there was a good reason for that!) Quoting the column content should be done for any string-like columns (although I found that other delimiter characters usually work better, as the need to "escape" quotes becomes another issue). If you're exporting to CSV that should be an option... but then I'm not familiar with the export wizard.
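If the problem is the column content (the addresses with commas), one option is to do the quoting in the export query itself rather than relying on the wizard. A rough sketch with hypothetical table and column names:
SELECT
    Id,
    -- wrap the value in double quotes and double any embedded quotes
    '"' + REPLACE(Address, '"', '""') + '"' AS Address
FROM dbo.Addresses;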
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and their Kettle component. It offered a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.

When creating a new table how do I change the default column definition in SQL Server 2008?

When creating a new table in SSMS - right click on the "Tables" node, choose "New Table..." - the default definition for a new column is nchar(10).
Can I change that? Let's say I would like the default column definition to be varchar(5) whenever someone creates a new table.
I can't find any sort of option to change that, I'm thinking maybe it's a registry setting?
In the Registry:
Path - HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\100\Tools\Shell\DataProject
Name - SSVDefaultColumnType
Also look at SSVDefault*Length (e.g. SSVDefaultCharLength) to set the length.
All caveats about changing the registry apply.
Thanks for that. One small thing: when there have been multiple installations/versions of SSMS installed, it's necessary to ensure that the right versions of the keys are changed.
Seems obvious, but I got caught out.
You can't. Actually, you can: see Darryl Peterson's answer.
To be honest, you're quicker to use CREATE TABLE in a query window and type it all manually. You pick the syntax up quickly enough.
To start, you could look at View > Template Explorer. There is a "Create Table" template.
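For example (a trivial sketch with a hypothetical table), typing the definition directly gives you exactly the types you want instead of the designer's nchar(10) default:
CREATE TABLE dbo.SampleTable
(
    Id   int        NOT NULL PRIMARY KEY,
    Code varchar(5) NOT NULL
);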
