Knex migrations with Electron and sqlite?

I am using sqlite as my database for an offline app which is made in electron.
For creating the database, I was using knex migrations.
The problem is that this works fine in development: I migrate the database and then start the Electron process.
But for the production build, I need the migrations to run on the client machine at first start-up, so that the database is created there, and so that when there is an application update, new migrations keep the database up to date.
What is the appropriate approach for this? How do I run the migrations on app start-up, and how do I keep the migration files in the bundle?
Won't all the code be kept in app.asar? Will the migration code be run from there?
Also, where should the database be created in the client machine?

If you are using electron-builder, then you can add this to electron-builder.json:
"extraFiles": "migrations/*",
where migrations is the folder where you keep the migrations.
To run the migrations automatically on start-up, you can add the following code:
const knex = require('knex'); // config is your knexfile object keyed by environment, env is the current environment name
const client = knex(config[env]);
client.migrate.latest(config); // returns a promise; applies any pending migrations
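To tie this together with the questions about app.asar and where the database should live: the migrations folder is shipped outside the asar archive (that is what the extraFiles setting above does), and the database itself usually goes into Electron's per-user data directory so that it is writable and survives application updates. Here is a rough sketch of the main-process start-up; the file names, the sqlite3 client and the exact location of the shipped migrations folder are assumptions about your setup:

const path = require('path');
const { app } = require('electron');
const knex = require('knex');

app.whenReady().then(async () => {
  const client = knex({
    client: 'sqlite3',
    useNullAsDefault: true,
    connection: {
      // writable per-user location; survives application updates
      filename: path.join(app.getPath('userData'), 'database.sqlite'), // placeholder name
    },
    migrations: {
      // folder shipped alongside the app (extraFiles/extraResources);
      // adjust this path to wherever your build actually places it
      directory: path.join(process.resourcesPath, 'migrations'),
    },
  });
  await client.migrate.latest(); // apply any migrations that have not run yet
  // ...then create your BrowserWindow and continue start-up
});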

Related

How to get database off of localhost and running permanently?

So I'm not sure if this is a stupid question, but I'm running a Neo4j database server (using Apollo Server) from my React application. Currently, I run it using node in a separate terminal (and I can navigate to it on localhost), then run npm start in a different terminal to get my application going. How can I get the database up and running permanently, so that if customers use the product they can always access the database? Or, if this isn't good practice, how should I establish the database connection while I run my client code?
Technologies being used: ReactJS, Neo4j Database, GraphQL + urql
I tried moving the Apollo server code into the App.tsx file of my application to run it from there directly when my app is launched, but this was giving me errors. I'm not sure if this is the proper way to do it, as I think it should be abstracted out of the client code?
If you want to run your server in the cloud so that customers can access your React application, you need two things:
A server/service to run your database, e.g. Neo4j AuraDB (Free/Pro) or one of the cloud marketplaces: https://neo4j.com/docs/operations-manual/current/cloud-deployments/
A service to run your React application, e.g. Netlify, Vercel or one of the cloud providers (GCP, AWS, Azure), which you then have to configure with the server URL + credentials of your Neo4j server
You can run neo4j-admin dump --to database.dump on your local instance to create a copy of your database content and upload it to the cloud service. For 5.x the syntax is different, I think neo4j-admin database dump --path folder.

How to solve liquibase waiting for changelog lock problem in several pods in OpenShift cluster?

We are supporting several microservices written in Java using Spring Boot and deployed in OpenShift. Some microservices communicate with databases. We often run a single microservice in multiple pods in a single deployment. When each microservice starts, it runs Liquibase, which tries to update the database. The problem is that sometimes one pod fails while waiting for the changelog lock.
When this happens in our production OpenShift cluster, we expect the other pods to fail on restart because of the same changelog lock issue. So, in the worst-case scenario, all pods end up waiting for the lock to be released.
We want Liquibase to automatically prepare our database schemas when each pod is starting.
Is it good practice to keep this logic in every microservice? How can we automatically recover when the Liquibase changelog lock problem appears? Do we need to put the database preparation logic in a separate deployment?
So maybe I should rephrase my question: what is the best way to run DB migrations in a microservice architecture? Maybe we should not run DB migrations in each pod? Maybe it is better to do it in a separate deployment, or with an extra Jenkins job outside of OpenShift altogether?
We're running liquibase migrations as an init-container in Kubernetes. The problem with running Liquibase in micro-services is that Kubernetes will terminate the pod if the readiness probe is not successful before the configured timeout. In our case this happened sometimes during large DB migrations, which could take a few minutes to complete. Kubernetes will terminate the pod, leaving DATABASECHANGELOGLOCK in a locked state. With init-containers you will not have this problem. See https://www.liquibase.org/blog/using-liquibase-in-kubernetes for a detailed explanation.
UPDATE
Please take a look at this Liquibase extension, which replaces the StandardLockService, by using database locks: https://github.com/blagerweij/liquibase-sessionlock
This extension uses MySQL or Postgres user lock statements, which are automatically released when the database connection is closed (e.g. when the container is stopped unexpectedly). The only thing required to use the extension is to add a dependency to the library. Liquibase will automatically detect the improved LockService.
I'm not the author of the library, but I stumbled upon it while searching for a solution. I helped the author by releasing the library to Maven Central. It currently supports MySQL and PostgreSQL, but it should be fairly easy to add support for other RDBMSs.
When Liquibase kicks in during the spring-boot app deployment, it performs (on a very high level) the following steps:
lock the database (create a record in databasechangeloglock)
execute changeLogs;
remove database lock;
So if you interrupt the application deployment while Liquibase is between steps 1 and 3, your database will remain locked. When you then try to redeploy your app, Liquibase will fail because it still treats the database as locked.
So you have to unlock the database before deploying the app again.
There are two options that I'm aware of:
Clear databasechangeloglock table or set locked to false. Which is DELETE FROM databasechangeloglock or UPDATE databasechangeloglock SET locked=0
Execute the liquibase releaseLocks command. See the Liquibase documentation for details.
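For example, a commonly used form of option 1 (the table and column names are the Liquibase defaults; whether LOCKED is a boolean or a 0/1 integer depends on your database):

-- inspect the current lock
SELECT * FROM DATABASECHANGELOGLOCK;
-- release it
UPDATE DATABASECHANGELOGLOCK SET LOCKED = FALSE, LOCKGRANTED = NULL, LOCKEDBY = NULL WHERE ID = 1;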
We managed to solve this in my company by following the same approach Liquibase suggests with init containers, but instead of using a new container and running the Liquibase migration via the Liquibase CLI, we reuse the existing Spring Boot service setup and just execute the Liquibase logic. We have created an alternative main class that can be used in an entrypoint to populate the database using Liquibase.
The InitContainerApplication class brings the minimal configuration required to start the application and set up Liquibase.
Typical usage:
entrypoint: "java -cp /app/extras/*:/app/WEB-INF/classes:/app/WEB-INF/lib/* com.backbase.buildingblocks.auxiliaryconfig.InitContainerApplication"
Here is the class:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
import org.springframework.context.ApplicationContext;

@SpringBootConfiguration
@ImportAutoConfiguration(InitContainerAutoConfigurationSelector.class)
public class InitContainerApplication implements ApplicationRunner {

    @Autowired
    private ApplicationContext appContext;

    public static void main(String[] args) {
        SpringApplication.run(InitContainerApplication.class, args);
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // Liquibase has already run as part of the Spring context start-up,
        // so simply exit with status 0 once the context is up.
        SpringApplication.exit(appContext, () -> 0);
    }
}
Here is how it is used as an init container:
spec:
  initContainers:
    - name: init-liquibase
      command: ['java']
      args: ['-cp', '/app/extras/*:/app/WEB-INF/classes:/app/WEB-INF/lib/*',
             'com.backbase.buildingblocks.auxiliaryconfig.InitContainerApplication']
Finally, we solved this problem in another project by removing the Liquibase migration from microservice start-up. Now a separate Jenkins job applies the migration, and another Jenkins job deploys and starts the microservice after the migration has been applied. So the microservice itself no longer applies database updates.
I encountered this issue when one of the Java applications I manage abruptly shut down.
The logs displayed the error below when the application tried to start:
waiting to acquire changelock
Here's how I solved it:
Stop the application.
Delete the databasechangelog and databasechangeloglock tables in the database connected to the application.
Restart the application.
In my case the application was connected to 2 databases. I had to delete the databasechangelog and databasechangeloglock tables in both databases and then restart the application. The databasechangelog and databasechangeloglock tables in both databases have to be in sync.
After this the application was able to acquire the changelog lock.

Bitbucket and Database Development

I have a Windows server with MS SQL Server running on it.
On the SQL Server developers have created stored procedures, views, tables, triggers.
On the Windows server developers created shell scripts.
I would like to start versioning the code described above in a BitBucket repository. I have a repository created in BitBucket.
How should the branches be organized in this repository? i.e. "SQL Server\Database\..." and "Windows Server\shell_script\..."
Can I connect BitBucket to SQL Server and Windows Server and specify which code needs to be versioned?
Are both of the options (1 and 2) above possible?
I just need to version control the changes to the code and have the ability to mark under which project the code change was made.
I am new to BitBucket. I am using the web front end of it. I do not know how to configure command line access, so please try not to reference Bitbucket commands. Sorry if I sound confusing.
Please help.
I know this is an old question but anyway, in principle I'd recommend:
Put all the server shell scripts into one place and make that a git repo linked to your bitbucket repo
Add a server shell script to export what you want version controlled from the SQL db
The export from the SQL db should be to text files so they are easily 'diffable' (see the sketch at the end of this answer)
You might as well make the export to a sub-directory within the shell scripts repo so that everything is in one place and can't get out of sync
So you only have one branch, not a separate one for server shell scripts and db
Make sure people run the export script and then commit everything when they make a change
You ideally have a test server, which means you'd want a way to push changes from the repo into the SQL db. I presume you can do this with a script by deleting the server setup and re-creating it from the text files.
So basically, you can't connect an SQL db to bitbucket directly. You need scripts to read and write to the db from a repo.
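If you want a starting point for the export script, here is a rough sketch in Node.js using the mssql package. The connection details, the output folder and the choice to pull definitions from sys.sql_modules are assumptions about your setup; it covers stored procedures, views, functions and triggers, while table definitions would need a separate approach (e.g. SSMS "Generate Scripts" or mssql-scripter):

// export-db-objects.js — writes one .sql file per programmable object
const fs = require('fs');
const path = require('path');
const sql = require('mssql');

async function exportObjects() {
  const pool = await sql.connect({
    server: 'localhost',        // placeholder
    database: 'MyDatabase',     // placeholder
    user: 'export_user',        // placeholder
    password: process.env.DB_PASSWORD,
    options: { trustServerCertificate: true },
  });

  // definition of every procedure, view, function and trigger
  const result = await pool.request().query(`
    SELECT s.name AS schema_name, o.name AS object_name, m.definition
    FROM sys.sql_modules m
    JOIN sys.objects o ON o.object_id = m.object_id
    JOIN sys.schemas s ON s.schema_id = o.schema_id`);

  const outDir = path.join(__dirname, 'db'); // sub-directory inside the repo
  fs.mkdirSync(outDir, { recursive: true });
  for (const row of result.recordset) {
    const file = `${row.schema_name}.${row.object_name}.sql`;
    fs.writeFileSync(path.join(outDir, file), row.definition);
  }
  await pool.close();
}

exportObjects().catch((err) => { console.error(err); process.exit(1); });

Running a script like this and committing the resulting db/ folder whenever something changes keeps the SQL code under version control alongside the shell scripts.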

Electron - How to setup db with sqlite in Windows

I have created an electron app, and built it with electron-builder. It creates a package in the dist folder, which I am able to install and then run the resulting application.
I have a sqlite database in the root folder of my project, with some data in it. But when I package and then run the exe file, it seems not to connect to the database or it appears empty. If I simply run the project with electron without packing, it is able to connect to the database and make use of the data.
Also, if I visit the installation folder, I find a copy of the database I had in my application, but without any rows in it. Inside the .asar archive there is a database populated as I would want, but that one I supposedly cannot edit.
Would you have any pointers on what could be causing this? How can I properly connect to the database I have in the root folder of my project using sqlite, sequelize, windows and electron?
Thanks in advance
Ensure that electron-builder doesn't pack the database file into the app ASAR (use the asarUnpack option).
If your packaged app needs to modify the database then have it copy the file to the location returned by app.getPath('userData') and work with that copy. Your app generally does not have permission to write to the directory in which it is installed.
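As a rough sketch of that second point (the file name is a placeholder, and where the bundled seed copy ends up depends on your electron-builder settings, e.g. asarUnpack or extraResources):

const fs = require('fs');
const path = require('path');
const { app } = require('electron');

function resolveDatabasePath() {
  // Writable per-user location, e.g. %APPDATA%\<your app> on Windows
  const userDbPath = path.join(app.getPath('userData'), 'database.sqlite'); // placeholder name
  if (!fs.existsSync(userDbPath)) {
    // First run: copy the seed database shipped with the app.
    // Adjust the source path to match your build layout.
    const seedDbPath = path.join(process.resourcesPath, 'database.sqlite');
    fs.copyFileSync(seedDbPath, userDbPath);
  }
  return userDbPath;
}

Then point sqlite/Sequelize at the returned path (for Sequelize's sqlite dialect that is the storage option) instead of at the copy inside the installation folder.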

What is the common practice to create db schema in Cloud Foundry?

I have been searching for a while for a best practice for initializing the relational database schema and pre-populating data.
There are a couple of ways to make it happen:
Install cf-ex-phpmyadmin and import the data and schema through it
Use the VMC CLI tool to create a tunnel to the service
If using ruby or python, use the db migration command in the manifest.yml. However, it will be executed on each instance and every time the instance re-stages.
Which one is commonly used and most effective?
VMC is very old and is no longer supported. I'd be surprised if it even works against a Cloud Foundry installation that has been deployed within the last couple years. You should use the new cf CLI.
If you were to put the command in your manifest, you could avoid having it run on every instance by adding a conditional guard that only runs the migrations when $CF_INSTANCE_INDEX equals 0. However, it's not always a great idea to run migrations in your start command: there is a hard timeout on the start command, and you don't want long migrations to be interrupted.
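For what it's worth, a rough sketch of such a guard in a manifest.yml, assuming a Python app; the application name, migration command and start command below are placeholders:

applications:
- name: my-app
  command: >
    if [ "$CF_INSTANCE_INDEX" = "0" ]; then python manage.py migrate; fi
    && gunicorn myapp:app --bind 0.0.0.0:$PORT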
A good suggestion I've heard [1] is that migrations should be handled as a separate part of your deploy process, either by cf ssh or running them locally, pointed at the URL and credentials of your database service instance.
[1] credit to Travis Grathwell for this suggestion.
