Multiple database schemas for different databases in Spring Boot - sql-server

I have a Spring Boot application that needs to be deployed on PCF. I want to use an H2 database for local testing, but once deployed to PCF I will be using SQL Server. Each database needs its own schema, so I have two schema.sql files: one for H2 and another for SQL Server. How can I tell Spring that schema-h2.sql should be used for the local profile and schema-sqlserver.sql for the cloud profile?

You can set spring.datasource.platform to differentiate the schema and data SQL files per database.
For example, with
spring.datasource.platform=h2
the files should be named schema-h2.sql and data-h2.sql.
Make sure you also set spring.datasource.initialization-mode=always
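A minimal sketch of wiring this to the two profiles, assuming they are named local and cloud (adjust to whatever profile names the deployment actually uses):

# application-local.properties (assumed profile name)
spring.datasource.platform=h2
spring.datasource.initialization-mode=always
# Spring Boot will run schema-h2.sql (and data-h2.sql, if present) at startup

# application-cloud.properties (assumed profile name)
spring.datasource.platform=sqlserver
spring.datasource.initialization-mode=always
# Spring Boot will run schema-sqlserver.sql (and data-sqlserver.sql, if present) at startup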

Related

H2 in memory: import mv database programmatically

I have an H2 physical database file, h2.mv.db, in the "users" directory of my machine. I want my Spring Boot application to use an in-memory H2 database, but when the app starts I want the in-memory database to import the h2.mv.db file. I know I can export a SQL script and then execute it manually using the H2 web console, but can this be achieved programmatically?
You can configure spring.datasource.url to point to the file.
Another approach to this problem is to export the SQL script, as you said, and put it inside a file called src/main/resources/data.sql.
Spring Boot automatically picks up the data.sql file and runs it against the H2 database during application startup.
# persist the data (Unix-style path)
spring.datasource.url=jdbc:h2:file:/data/sampledata
# or, on Windows
spring.datasource.url=jdbc:h2:C:/data/sampledata
Source: https://www.javatpoint.com/spring-boot-h2-database
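If the requirement really is an in-memory database seeded from the existing file, another option (a sketch, assuming the file database has first been exported with H2's SCRIPT command; the /users/export.sql path is an assumption based on the question's "users" directory) is H2's INIT=RUNSCRIPT URL parameter:

-- run once against the existing file database (H2 shell or web console)
SCRIPT TO '/users/export.sql';

# then have the in-memory database replay that script on startup
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=RUNSCRIPT FROM 'file:/users/export.sql'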

how to mirror a whole database cluster in postgresql

I'm using a PostgreSQL (9.6) database in my project, which is currently in the development stage.
For production I want to use an exact copy/mirror of the database cluster with a slightly different name.
I am aware of the fact that I can make a backup and restore it under a different cluster name, but is there something like a mirror function via the psql client or pgAdmin (v.4) that mirrors all my schemas and tables and puts them in a new cluster name?
In PostgreSQL you can use any existing database on the server as a template when you want to create a new database with the same content (the source database needs to be idle for this to work). You can use the following SQL statement:
CREATE DATABASE newdb WITH TEMPLATE someDbName OWNER dbuser;
But you need to make sure no user is currently connected to or using that database - otherwise you will get the following error:
ERROR: source database "someDbName" is being accessed by other users
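If other sessions are holding the source database open, a sketch of forcibly disconnecting them first (PostgreSQL 9.2+; run as a superuser or the database owner):

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'someDbName'
  AND pid <> pg_backend_pid();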
Hope that helped ;)

Why does Hibernate not find the default schema defined in application.properties?

I am currently trying to configure my Spring Boot application so it can talk to my H2 database through Hibernate. As far as I can tell, the application successfully connects to the database, but it is then unable to find the default schema defined in my application.properties file. I have already tried manually creating the database schema through RazorSQL, but this did not work.
application.properties:
spring.datasource.url=jdbc:h2:~/dndmp;AUTO_SERVER=TRUE
spring.datasource.platform=h2
spring.datasource.continue-on-error=true
spring.datasource.username=DNDMP
spring.datasource.password=tttt
spring.jpa.properties.hibernate.default_schema=dndmp
spring.jpa.properties.hibernate.ddl-auto=create-drop
spring.jpa.properties.hibernate.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
Use:
spring.jpa.hibernate.ddl-auto=update
The spring.jpa.properties.* prefix is passed straight through to Hibernate, and hibernate.ddl-auto is not a Hibernate property name, so the setting in the question is silently ignored; the Spring Boot shortcut property is spring.jpa.hibernate.ddl-auto.
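A sketch of the relevant properties using the Spring Boot property names, plus H2's INIT clause to create the schema on connect in case it does not already exist (the CREATE SCHEMA clause is an assumption about what is missing, not something confirmed in the question):

spring.datasource.url=jdbc:h2:~/dndmp;AUTO_SERVER=TRUE;INIT=CREATE SCHEMA IF NOT EXISTS DNDMP
spring.jpa.properties.hibernate.default_schema=DNDMP
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true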

How to upload files and store them in a server local path when MS SQL SERVER allows remote connections?

I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine in a LAN, but I am trying to add support for SQL Server remote connections (i.e. working with a DB that can be accessed through an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277).
Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \\FILESERVER\MyApplicationDocuments\45.zip).
Of course \\FILESERVER is a local (LAN) path for the server but not for the client (as I am now trying to add support for remote connections).
So I need a way to access \\FILESERVER even though, of course, I cannot see it on the LAN.
I found the following T-SQL code snippet that is perfect as a "download trick":
SELECT BulkColumn AS MyFile FROM OPENROWSET(BULK '\\FILESERVER\MyApplicationDocuments\45.zip', SINGLE_BLOB) AS X
With the code above I can download a file to the client.
But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files.
Can anyone suggest one? If a trick is not available, could you suggest an alternative? Like an extended stored procedure, or calling some .NET assembly from the server.
If you have SQL 2008, then you can use FILESTREAM; SQL Server will then transparently store the data on disk for you.
If you have SQL 2005, I'd consider just moving the data into a varbinary(max) column and dealing with it that way (also pretty simple).
If neither of those apply, or you can't shove it into a varbinary column, then I would not use SQL Server to handle the actual file contents and would instead have a web service which stores the file on the file system or a SAN that the web service can easily access.
UPDATE:
One other idea that crossed my mind: if you are using SQL 2005/08, you can write a CLR stored procedure in .NET. This could handle transferring the blob data to and from the local file system.
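To make the FILESTREAM option above concrete, a minimal T-SQL sketch (assuming FILESTREAM has already been enabled on the instance and a FILESTREAM filegroup added to the database; table and column names are illustrative):

-- documents table whose Content column is stored on the NTFS file system by SQL Server
CREATE TABLE dbo.Documents (
    DocumentID   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Description  NVARCHAR(200)    NOT NULL,
    Content      VARBINARY(MAX)   FILESTREAM NULL
);

-- "upload" from the remote client is then just a parameterised INSERT over the normal connection
INSERT INTO dbo.Documents (Description, Content)
VALUES (@Description, @FileBytes);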
In an ideal world I would create a simple:
- ASP.NET web service
- or .NET Remoting service (faster than a web service)
- or new .NET 4.0 RIA service
and deploy it on the SQL Server machine on a custom TCP/IP port.
This service would listen on that port and the client would request the file through it. The service would fetch the file over the local LAN and talk to the DB via a local OLE DB connection.
I would not use any of SQL Server's built-in "web service" support - it brings security and performance issues.
UPDATE:
Since this is a Delphi app, you can do the same using Delphi; the solution above is still valid, but it is more work to integrate the different technologies. Delphi has its own tools for building remote applications.
If you are on 2005, you could try storing the file in a temporary blob field of some temp table, and then calling a stored procedure which puts the file where you want it and updates the path field accordingly.
In that stored procedure you must use extended stored procedures (xp_something), which allow access to the file system. That means they need to be enabled on the SQL Server.
BTW, you are trying to use a relational DB as a document database. That will, sooner or later, backfire.
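For reference, a sketch of how one of those xp_ procedures (xp_cmdshell) is enabled via sp_configure; whether enabling it is acceptable is a security decision in its own right:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;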

Uploading data into remote database

What is the most secure and easiest way to send approximately 1000 different records from a Windows application into a database that is not directly accessible - a MySQL database on a web provider's server?
The data will be stored in different tables.
Edited:
The application will be distributed to users who have no idea what a database or PuTTY is. They just install my application, open it, enter some data and press Submit.
Currently I'm using PHP to upload the generated script to the web server and process it there. I think I should also include some signature in the file to avoid "drop..." hacks.
If you can export the data as a SQL script, you can just run it against the remote server using your application of choice. 1000 records won't create that big a script.
In the current project at my job we have the same situation - a remote (faraway) database.
My solution: serialize the SQL query into XML and send it via HTTP to a web daemon running on the remote server, instead of exposing the SQL server directly. The daemon checks credentials and executes the query.
As I can't execute any external programs on the external server, I came up with the following solution:
- My program creates the script file and calculates its salted hash
- The program sends this file, together with the user credentials and the hash, to a PHP page on the server
- The PHP page checks the username and password, then checks the hash, and then executes the script. Only INSERT and UPDATE commands are allowed.
Is this approach secure enough?
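One way to make the "salted hash" check concrete is a keyed hash (HMAC) over the script, with a secret shared between the application and the PHP page; the PHP side recomputes the tag (e.g. with hash_hmac) and rejects mismatches. A sketch of the client-side computation in Java, purely illustrative since the actual client is a Windows application and the key handling here is an assumption:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HexFormat;

public class ScriptSigner {
    // Computes an HMAC-SHA256 tag over the generated SQL script.
    // The server side would recompute the same tag with the shared secret and compare.
    static String sign(Path script, String sharedSecret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] tag = mac.doFinal(Files.readAllBytes(script));
        return HexFormat.of().formatHex(tag);
    }
}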
