Auto increment ID in H2 database

Is there a way to have an auto-incrementing BIGINT ID for a table?
It can be defined like so
id bigint auto_increment
but that has no effect (it does not increment automatically).
I would like to insert all fields except the ID field; the ID should be provided by the DBMS.
Or do I need to call something to increment the ID counter?

It works for me. JDBC URL: jdbc:h2:~/temp/test2
drop table test;
create table test(id bigint auto_increment, name varchar(255));
insert into test(name) values('hello');
insert into test(name) values('world');
select * from test;
result:
ID NAME
1 hello
2 world

IDENTITY
The modern approach uses the IDENTITY type to automatically generate an incrementing 64-bit long integer.
This single-word syntax used in H2 is an abbreviated variation of GENERATED … AS IDENTITY defined in the SQL:2003 standard. See the summary in the PDF document SQL:2003 Has Been Published. Other databases, such as Postgres, are implementing this as well.
CREATE TABLE event_
(
pkey_ IDENTITY NOT NULL PRIMARY KEY , -- ⬅ `identity` = auto-incrementing long integer.
name_ VARCHAR NOT NULL ,
start_ TIMESTAMP WITH TIME ZONE NOT NULL ,
duration_ VARCHAR NOT NULL
)
;
Example usage. No need to pass a value for our pkey_ column, as it is being automatically generated by H2.
INSERT INTO event_ ( name_ , start_ , duration_ )
VALUES ( ? , ? , ? )
;
And Java.
ZoneId z = ZoneId.of( "America/Montreal" ) ;
OffsetDateTime start = ZonedDateTime.of( 2021 , Month.JANUARY , 23 , 19 , 0 , 0 , 0 , z ).toOffsetDateTime() ;
Duration duration = Duration.ofHours( 2 ) ;
myPreparedStatement.setString( 1 , "Java User Group" ) ;
myPreparedStatement.setObject( 2 , start ) ;
myPreparedStatement.setString( 3 , duration.toString() ) ;
Returning generated keys
Statement.RETURN_GENERATED_KEYS
You can capture the value generated during that insert command execution. Two steps are needed. First, pass the flag Statement.RETURN_GENERATED_KEYS when getting your prepared statement.
PreparedStatement pstmt = conn.prepareStatement( sql , Statement.RETURN_GENERATED_KEYS ) ;
Statement::getGeneratedKeys
Second step is to call Statement::getGeneratedKeys after executing your prepared statement. You get a ResultSet whose rows are the identifiers generated for the created row(s).
Example app
Here is an entire example app. Running on Java 14 with Text Blocks preview feature enabled for fun. Using H2 version 1.4.200.
package work.basil.example;

import org.h2.jdbcx.JdbcDataSource;

import java.sql.*;
import java.time.*;
import java.util.Objects;

public class H2ExampleIdentity
{
    public static void main ( String[] args )
    {
        H2ExampleIdentity app = new H2ExampleIdentity();
        app.doIt();
    }

    private void doIt ( )
    {
        JdbcDataSource dataSource = Objects.requireNonNull( new JdbcDataSource() ); // Implementation of `DataSource` bundled with H2.
        dataSource.setURL( "jdbc:h2:mem:h2_identity_example_db;DB_CLOSE_DELAY=-1" ); // Set `DB_CLOSE_DELAY` to `-1` to keep in-memory database in existence after connection closes.
        dataSource.setUser( "scott" );
        dataSource.setPassword( "tiger" );

        String sql = null;

        try (
                Connection conn = dataSource.getConnection() ;
        )
        {
            // Create table.
            sql = """
                  CREATE TABLE event_
                  (
                      id_ IDENTITY NOT NULL PRIMARY KEY, -- ⬅ `identity` = auto-incrementing integer number.
                      title_ VARCHAR NOT NULL ,
                      start_ TIMESTAMP WITHOUT TIME ZONE NOT NULL ,
                      duration_ VARCHAR NOT NULL
                  )
                  ;
                  """;
            System.out.println( "sql: \n" + sql );
            try ( Statement stmt = conn.createStatement() ; )
            {
                stmt.execute( sql );
            }

            // Insert row.
            sql = """
                  INSERT INTO event_ ( title_ , start_ , duration_ )
                  VALUES ( ? , ? , ? )
                  ;
                  """;
            try (
                    PreparedStatement pstmt = conn.prepareStatement( sql , Statement.RETURN_GENERATED_KEYS ) ;
            )
            {
                ZoneId z = ZoneId.of( "America/Montreal" );
                ZonedDateTime start = ZonedDateTime.of( 2021 , 1 , 23 , 19 , 0 , 0 , 0 , z );
                Duration duration = Duration.ofHours( 2 );

                pstmt.setString( 1 , "Java User Group" );
                pstmt.setObject( 2 , start.toOffsetDateTime() );
                pstmt.setString( 3 , duration.toString() );
                pstmt.executeUpdate();

                try (
                        ResultSet rs = pstmt.getGeneratedKeys() ;
                )
                {
                    while ( rs.next() )
                    {
                        long id = rs.getLong( 1 ); // The generated `IDENTITY` value is a 64-bit long.
                        System.out.println( "generated key: " + id );
                    }
                }
            }

            // Query all.
            sql = "SELECT * FROM event_ ;";
            try (
                    Statement stmt = conn.createStatement() ;
                    ResultSet rs = stmt.executeQuery( sql ) ;
            )
            {
                while ( rs.next() )
                {
                    // Retrieve by column name.
                    long id = rs.getLong( "id_" );
                    String title = rs.getString( "title_" );
                    OffsetDateTime odt = rs.getObject( "start_" , OffsetDateTime.class ); // Pass the class for type-safety.
                    Instant instant = odt.toInstant(); // If you want to see the moment in UTC.
                    Duration duration = Duration.parse( rs.getString( "duration_" ) );

                    // Display values.
                    ZoneId z = ZoneId.of( "America/Montreal" );
                    System.out.println( "id_" + id + " | start_: " + odt + " | duration: " + duration + " ➙ running from: " + odt.atZoneSameInstant( z ) + " to: " + odt.plus( duration ).atZoneSameInstant( z ) );
                }
            }
        }
        catch ( SQLException e )
        {
            e.printStackTrace();
        }
    }
}
Next, see results when run.
Instant, OffsetDateTime, & ZonedDateTime
At the time of this execution, my JVM's current default time zone is America/Los_Angeles. At the point in time of the stored moment (January 23, 2021 at 7 PM in Québec), the zone America/Los_Angeles had an offset-from-UTC of eight hours behind. So the OffsetDateTime object returned by the H2 JDBC driver is set to an offset of -08:00. This is really a distraction, so in real work I would immediately convert that OffsetDateTime to either an Instant for UTC or a ZonedDateTime for a specific time zone I had in mind. Be clear in understanding that the Instant, OffsetDateTime, and ZonedDateTime objects would all represent the same simultaneous moment, the same point on the timeline. Each views that same moment through a different wall-clock time. Imagine three people in California, Québec, and Iceland (whose zone is UTC, an offset of zero) all talking on a conference call, and they each looked up at the clock on their respective wall at the same coincidental moment.
generated key: 1
id_1 | start_: 2021-01-23T16:00-08:00 | duration: PT2H ➙ running from: 2021-01-23T19:00-05:00[America/Montreal] to: 2021-01-23T21:00-05:00[America/Montreal]
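For example, the conversions described above might look like this in code (a minimal sketch; the OffsetDateTime value is hard-coded here to mirror the output shown above):
OffsetDateTime odt = OffsetDateTime.parse( "2021-01-23T16:00-08:00" ); // The value as returned by the H2 JDBC driver in the run above.
Instant instant = odt.toInstant(); // Same moment, viewed in UTC (2021-01-24 00:00 UTC).
ZonedDateTime zdt = odt.atZoneSameInstant( ZoneId.of( "America/Montreal" ) ); // Same moment, viewed with Québec wall-clock time (19:00 on January 23).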
By the way, in real work on an app booking future appointments, we would use a different data type in Java and in the database.
We would have used LocalDateTime and ZoneId in Java. In the database, we would have used a data type akin to the SQL standard type TIMESTAMP WITHOUT TIME ZONE with a second column for the name of the intended time zone. When retrieving values from the database to build a scheduling calendar, we would apply the time zone to the stored date-time to get a ZonedDateTime object. This would allow us to book appointments for a certain time-of-day regardless of changes to the offset-from-UTC made by the politicians in that jurisdiction.
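A minimal sketch of that booking approach (the sample date-time, zone, and variable names here are hypothetical, for illustration only):
LocalDateTime ldt = LocalDateTime.of( 2021 , Month.MARCH , 15 , 15 , 30 ); // The wall-clock date and time of the appointment, with no zone or offset.
ZoneId zone = ZoneId.of( "America/Montreal" ); // The intended time zone, stored alongside as its zone name.
ZonedDateTime appointment = ldt.atZone( zone ); // Applied at retrieval time, to determine the actual moment under whatever offset rules are then in force.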

Very simple:
id int auto_increment primary key
H2 will create a Sequence object automatically.

You can also use the default keyword:
create table if not exists my(id int auto_increment primary key,s text);
insert into my values(default,'foo');

id bigint(size) zerofill not null auto_increment,

Related

Flink Table API -> Streaming Sink?

I see examples that convert a Flink Table object to a DataStream and run StreamExecutionEnvironment.execute.
How would I code and run a continuous query that writes to a streaming sink with the Table API, without converting to a DataStream?
It seems this must be possible, because otherwise what is the purpose of specifying streaming sink Table Connectors?
The Table API docs list continuous queries and dynamic tables, yet most of the actual Java APIs and code examples seem to only use the table API for batch.
EDIT: To show David Anderson what I'm trying to do, here are the three Flink SQL CREATE TABLE statements on top of analogous Derby SQL tables.
I see the JDBC table connector sink supports streaming, but am I not configuring this correctly? I don't see anything that I'm overlooking.
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/jdbc.html
FYI, when I get my toy example working, I am planning on using Kafka in production for input/output stream-like data and JDBC/SQL for the lookup table.
CREATE TABLE LookupTableFlink (
`lookup_key` STRING NOT NULL,
`lookup_value` STRING NOT NULL,
PRIMARY KEY (lookup_key) NOT ENFORCED
) WITH (
'connector' = 'jdbc',
'url' = 'jdbc:derby:memory:myDB;create=false',
'table-name' = 'LookupTable'
);
CREATE TABLE IncomingEventsFlink (
`field_to_use_as_lookup_key` STRING NOT NULL,
`extra_field` INTEGER NOT NULL,
`proctime` AS PROCTIME()
) WITH (
'connector' = 'jdbc',
'url' = 'jdbc:derby:memory:myDB;create=false',
'table-name' = 'IncomingEvents'
);
CREATE TABLE TransformedEventsFlink (
`field_to_use_as_lookup_key` STRING,
`extra_field` INTEGER,
`lookup_key` STRING,
`lookup_value` STRING
) WITH (
'connector' = 'jdbc',
'url' = 'jdbc:derby:memory:myDB;create=false',
'table-name' = 'TransformedEvents'
);
String sqlQuery =
"SELECT\n" +
" IncomingEventsFlink.field_to_use_as_lookup_key, IncomingEventsFlink.extra_field,\n" +
" LookupTableFlink.lookup_key, LookupTableFlink.lookup_value\n" +
"FROM IncomingEventsFlink\n" +
"LEFT JOIN LookupTableFlink FOR SYSTEM_TIME AS OF IncomingEventsFlink.proctime\n" +
"ON (IncomingEventsFlink.field_to_use_as_lookup_key = LookupTableFlink.lookup_key)\n";
Table joinQuery = tableEnv.sqlQuery(sqlQuery);
// This seems to run, return, and complete and doesn't seem to run in continuous/streaming mode.
TableResult tableResult = joinQuery.executeInsert("TransformedEventsFlink");
You can write to a dynamic table by using executeInsert, as in
Table orders = tableEnv.from("Orders");
orders.executeInsert("OutOrders");
The documentation is here.
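To make that concrete, here is a minimal self-contained sketch (assuming Flink 1.12 with the built-in datagen and print connectors; the table names and schema are made up for the example) showing that executeInsert submits a continuous streaming job with no DataStream conversion:
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class ExecuteInsertSketch
{
    public static void main ( String[] args )
    {
        // Streaming-mode table environment; no DataStream API involved.
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build() );

        // Hypothetical unbounded source and sink, registered with connector DDL.
        tableEnv.executeSql(
                "CREATE TABLE Orders ( order_id BIGINT, amount DOUBLE ) " +
                "WITH ( 'connector' = 'datagen', 'rows-per-second' = '1' )" );
        tableEnv.executeSql(
                "CREATE TABLE OutOrders ( order_id BIGINT, amount DOUBLE ) " +
                "WITH ( 'connector' = 'print' )" );

        // executeInsert submits a continuous INSERT job and returns immediately;
        // the TableResult describes the submitted job, not a finished batch.
        // In a throwaway demo you may need to block afterwards so the JVM does not exit right away.
        TableResult result = tableEnv.from( "Orders" ).executeInsert( "OutOrders" );
    }
}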
It's explained here.
A code example can be found here:
// get StreamTableEnvironment.
StreamTableEnvironment tableEnv = ...; // see "Create a TableEnvironment" section
// Table with two fields (String name, Integer age)
Table table = ...
// convert the Table into an append DataStream of Row by specifying the class
DataStream<Row> dsRow = tableEnv.toAppendStream(table, Row.class);
// convert the Table into an append DataStream of Tuple2<String, Integer>
// via a TypeInformation
TupleTypeInfo<Tuple2<String, Integer>> tupleType = new TupleTypeInfo<>(
Types.STRING(),
Types.INT());
DataStream<Tuple2<String, Integer>> dsTuple =
tableEnv.toAppendStream(table, tupleType);
// convert the Table into a retract DataStream of Row.
// A retract stream of type X is a DataStream<Tuple2<Boolean, X>>.
// The boolean field indicates the type of the change.
// True is INSERT, false is DELETE.
DataStream<Tuple2<Boolean, Row>> retractStream =
tableEnv.toRetractStream(table, Row.class);

Removing Blank Days from Power BI Chart

I have created a weekly request measure like so:
RequestsWeekly = var result= CALCULATE(
DISTINCTCOUNTNOBLANK(SessionRequests[RequestDateTime]),
FILTER('Date','Date'[WeekDate]=SELECTEDVALUE('DateSelector'[WeekDate],MAX('DateSelector'[WeekDate]))))+0
RETURN
IF ( NOT ISBLANK ( result ), result)
DateSelector is a standalone table (not connected to any other table in the data model) that I created to hold all the dates for a drop-down menu in a Power BI dashboard. Unfortunately, as there are fewer dates in the DateSelector table than in the Date table, I get ...
The Date table is a standard date table full of dates from 1970 to 2038. Date connects to SessionRequests via a many-to-one relationship with a single-direction filter. SessionRequests is the main fact table.
I need to get rid of the blank row in my result set via DAX so it does not appear on the X axis of my chart. I have tried lots of different DAX combinations, like BLANK() and NOT ISBLANK. Do I need to create a table for the result set and then try to filter out the blank day there?
You should not check whether the result is empty, but whether VALUE ( Table[DayNameShort] ) exists for your current row context:
RequestsWeekly =
VAR result =
CALCULATE (
DISTINCTCOUNTNOBLANK ( SessionRequests[RequestDateTime] ),
FILTER (
'Date',
'Date'[WeekDate]
= SELECTEDVALUE (
'DateSelector'[WeekDate],
MAX ( 'DateSelector'[WeekDate] )
)
)
) + 0
RETURN
IF (
NOT ISBLANK (
VALUE ( Table[DayNameShort] ) -- put here correct table name
),
result
)

Cassandra counter type in User defined type

I have a type and a table
CREATE TYPE IF NOT EXISTS info(
street text,
c_counter counter
);
CREATE TABLE customer (
id UUID PRIMARY KEY,
customer_info info
);
What is the correct way to use the counter on the customer table within the UDT?
I tried this:
UPDATE customer SET customer_info.c_counter = customer_info.c_counter + 1 WHERE id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47;
and I got the error:
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line xxx no viable alternative at input '+' (...customer SET customer_info.c_counter = [customer_info].c_counter...)">
Thank you so much guys.

How to reconfigure path to the data files in Clarion 5 IDE?

There is a problem: a system written in Clarion 5 has come from the past, and now it needs to be rewritten in Java.
To do this, I need to understand its current state and how it works.
I generate the executable file via the Application Generator (*.APP -> *.CLW -> *.EXE, *.DLL).
But when I run it I get the message:
File(...\...\DAT.TPS) could not be opened. Error: Path Not Found(3). Press OK to end this application
And then - halt, File Access Error
What may the problem be? Is it possible to reconfigure the path to the data files in the Clarion 5 IDE?
Generally, Clarion uses a data dictionary (DCT) as the center of persistent data (files) that will be used by the program. There are other ways you can define a table, but since you mentioned you compile from the APP, I'm concluding that your APP is linked to a DCT.
In the DCT you have the declarations for every file your application will use. In the file declaration you can specify both the logical name and the disk file name. The error message says you have a problem in the definition of the disk file name.
The Clarion language separates the logical data structure definition from its disk file. A "file", for a Clarion program, is a complex data structure which conforms to the following:
structName FILE, DRIVER( 'driverType' ), NAME( 'diskFileName' )
key KEY( keyName )
index INDEX( indexName )
recordName RECORD
field DATATYPE
.
.
END
END
The above is the basic declaration syntax; a real example would look like this:
orders FILE, DRIVER( 'TopSpeed' ), NAME( 'sales.dat\orders' )
ordersPK KEY( id ), PRIMARY
customerK INDEX( customerID )
notes MEMO( 4096 )
RECORD RECORD
id LONG
customerID LONG
datePlaced DATE
status STRING( 1 )
END
END
orderItems FILE, DRIVER( 'TopSpeed' ), NAME( 'sales.dat\items' )
itemsPK KEY( orderID, id ), PRIMARY
RECORD RECORD
orderID LONG
id LONG
productID LONG
quantityOrdered DECIMAL( 10, 2 )
unitPrice DECIMAL( 10, 2 )
END
END
Now, with the above two declarations, I have two logical files that reside in the same disk file. This is a capability offered by some file drivers, like the TopSpeed file driver. It is up to the system designer to decide whether, and which, files will reside in the same disk file; I can talk about that in another post if needed.
For now, the problem may be arising from the fact that you probably didn't change the NAME property of the file declaration, and the driver you're using doesn't support multi-file storage.
Here's a revised file definition for the same case above, but targeting a SQL database.
szDBconn CSTRING( 1024 ) ! Connection string to the DB server
orders FILE, DRIVER( 'ODBC' ), NAME( 'orders' ), OWNER( szDBconn )
ordersPK KEY( id ), PRIMARY
customerK INDEX( customerID )
notes MEMO( 4096 ), NAME( 'notes' )
RECORD RECORD
id LONG, NAME( 'id | READONLY' )
customerID LONG
datePlaced DATE
status STRING( 1 )
END
END
orderItems FILE, DRIVER( 'ODBC' ), NAME( 'order_items' ), OWNER( szDBconn )
itemsPK KEY( orderID, id ), PRIMARY
RECORD RECORD
orderID LONG
id LONG
productID LONG
quantityOrdered DECIMAL( 10, 2 )
unitPrice DECIMAL( 10, 2 )
END
END
Now, if you pay attention, you'll notice the presence of a szDBconn variable declaration, which is referenced in the file declarations. This is necessary to tell the Clarion file driver system what to pass to the ODBC manager in order to connect to the database. Check Connection Strings for plenty of connection string examples.
Check the DCT definitions of your files to see if they reflect what the driver expects.
Also, be aware that Clarion does allow mixing different file drivers to be used by the same program. Thus, you can adapt an existing program to use an external data source if needed.
Here is a complete Clarion program to transfer information from an ISAM file to a DBMS.
PROGRAM
MAP
END
INCLUDE( 'equates.clw' ) ! Include common definitions
szDBconn CSTRING( 1024 )
inputFile FILE, DRIVER( 'dBase3' )
RECORD RECORD
id LONG
name STRING( 50 )
END
END
outputFile FILE, DRIVER( 'ODBC' ), NAME( 'import.newcustomers' ), |
OWNER( szDBconn )
RECORD RECORD
id LONG
name STRING( 50 )
backendImportedColumn STRING( 8 )
imported GROUP, OVER( backendImportedColumn )
date DATE
time TIME
END
processed CHAR( 1 )
END
END
CODE
IF NOT EXISTS( COMMAND( 1 ) )
MESSAGE( 'File ' & COMMAND( 1 ) & ' doesn''t exist' )
RETURN
END
inputFile{ PROP:Name } = COMMAND( 1 )
OPEN( inputFile, 42h )
IF ERRORCODE()
MESSAGE( 'Error opening file ' & inputFile{ PROP:Name } )
RETURN
END
szDBconn = 'Driver={{PostgreSQL ANSI};Server=192.168.0.1;Database=test;' & |
'Uid=me;Pwd=plaintextpassword'
OPEN( outputFile, 42h )
IF ERRORCODE()
MESSAGE( 'Error opening import table: ' & FILEERROR() )
RETURN
END
! Let's stuff the information that'll be used for every record
outputFile.imported.date = TODAY()
outputFile.imported.time = CLOCK()
outputFile.processed = 'N'
! Arm the sequential ISAM file scan
SET( inputFile, 1 )
LOOP UNTIL EOF( inputFile )
NEXT( inputFile )
outputFile.id = inputFile.id
outputFile.name = inputFile.name
ADD( outputFile )
END
BEEP( BEEP:SystemExclamation )
MESSAGE( 'File importing completed' )
Well, this example program serves only the purpose of showing how the different elements of the program should be used. I didn't use a window to let the user track the progress, and I used Clarion's primitives, like ADD(), which work for sure but, inside a loop, can represent a drag on performance.
Much better would be to encapsulate the entire transfer in a transaction, opened with outputFile{ PROP:SQL } = 'BEGIN TRANSACTION' and, at the end, issuing an outputFile{ PROP:SQL } = 'COMMIT'.
Yes, through PROP:SQL one can issue ANY command accepted by the server, including a DROP DATABASE, so it is very powerful. Use it with care.
Gustavo

Grails keeps deleting my tables

I have my table structure like:
CREATE TABLE test_two_tabel.T1 ( T1_ID INT NOT NULL AUTO_INCREMENT , A1 INT NULL , B1 VARCHAR(45) NULL , C1 VARCHAR(45) NULL , D1 DATETIME NULL , PRIMARY KEY (T1_ID) );
In Grails:
package twotables

class T1 {
    Integer a1
    String b1
    String c1
    Date d1

    static mapping = {
        table "T1"
        version false
        id column: "T1_ID"
        a1 column: "a1"
        b1 column: "b1"
        c1 column: "c1"
        d1 column: "d1"
    }

    static constraints = {
        id()
        a1()
        b1()
        c1()
        d1()
    }
}
Every time I execute my program... Grails deletes my tables in the DB. Does anyone know what's happening?
You need to change the value of dbCreate from 'create-drop' to 'update' in grails-app/conf/DataSource.groovy.
Your current value is probably:
development {
    dataSource {
        dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
        url = "***"
    }
}
This means that Grails will recreate all tables on every restart. If you set it to update, Grails will try to update the table structure according to your domain model classes.
You can read more about Grails DB configuration at http://www.grails.org/doc/latest/guide/3.%20Configuration.html#3.3%20The%20DataSource
It could be a few things. As @splix mentioned, it could be the 'create-drop' setting.
Also, if you never changed your datasource, Grails uses an in-memory database, so it only lasts as long as the program runs. You can tell HSQLDB to persist to a file instead of being in memory. You can also change it to point to something like MySQL. Look here.
