Prisma - is it possible to seed unsupported data types? - postgis

I'm new to Prisma, and as I understand it, unsupported data types (e.g. PostGIS geometry) can only be worked with through raw queries.
But is it possible to use Prisma seed with unsupported types? Maybe there's an option to use raw queries there as well?
I need to add some test data with coordinates to the database when building the project.

Yes, you can use Prisma seed with unsupported data types.
You can execute a Bash script, seed.sh, that seeds your database with plain SQL; the seeding queries themselves go in a .sql file.
Example:
#!/bin/sh
# -e Exit immediately when a command returns a non-zero status.
# -x Print commands before they are executed
set -ex
# Seeding command (DATABASE_URL is the connection string Prisma already uses)
psql "$DATABASE_URL" -f file.sql
In this example, file.sql contains the SQL seeding queries that use the unsupported data types.
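For illustration, here is a hypothetical version of the script that also writes file.sql; the table name, columns, coordinates, and SRID are assumptions, not part of the original answer:
#!/bin/sh
set -ex
# Hypothetical seed data for a model with an Unsupported("geometry") column.
cat > file.sql <<'SQL'
INSERT INTO "Place" (name, location)
VALUES ('Test point', ST_SetSRID(ST_MakePoint(13.4050, 52.5200), 4326));
SQL
# DATABASE_URL is the same connection string Prisma reads from .env.
psql "$DATABASE_URL" -f file.sql
You would typically wire this up through the "prisma": { "seed": ... } entry in package.json (e.g. "sh seed.sh") so that prisma db seed and prisma migrate reset run it automatically.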

Related

Is there a way to run a db2 command without logging into the console?

I know that for mysql if I run:
mysql (db name) -e "select x from table;"
from the Linux command line, I can retrieve the result without connecting to the db interactively.
I'm trying to find a way to do the same for db2. Everything I find indicates I'd need to do it from a bash script, which I don't want to do because I'd have to deploy that script to 700+ servers.
Is there a method like the mysql one that allows this for db2?
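A minimal sketch, not from the original thread, assuming the DB2 command line processor is installed and the database is cataloged; MYDB, mytable, and x are placeholders. The calls can also be chained on a single command line rather than deployed as a script:
# Query DB2 from the shell without an interactive session.
# -x suppresses column headers so only the values are printed.
db2 connect to MYDB > /dev/null && db2 -x "SELECT x FROM mytable" && db2 terminate > /dev/null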

Do not drop and create database when scripting drop/create database objects and data with mssql-scripter

I'm trying to script the database objects and data of my database to later move it to a server where I don't have backup/restore rights. Instead I'm using the Generate Scripts method and I use mssql-scripter to generate the scripts.
I have a .bat file with the following script code to generate my SQL script file.
set timevar=%date:~4,2%%date:~7,2%%date:~10,4%-%time:~0,2%%time:~3,2%%time:~6,2%
mssql-scripter --server 10.100.8.8 -d Dev_db -f .\%timevar%.sql --schema-and-data --script-drop-create --target-server-version 2016 --target-server-edition Standard --check-for-existence --include-dependencies --constraint-names --collation -U ScriptingUser -P 1234 --exclude-use-database
The problem is that it's also scripting DROP DATABASE and CREATE DATABASE, which I don't want. I would only like to DROP and CREATE database objects and later populate tables with the scripted data.
Has anyone faced this problem and have you found a solution?
After fiddling around with the options a while longer, I managed to find the right parameter and workaround to solve my problem.
The exact code that I ran is:
set timevar=%date:~4,2%%date:~7,2%%date:~10,4%-%time:~0,2%%time:~3,2%%time:~6,2%
mssql-scripter --server 10.100.8.8 -d Dev_db -f .\%timevar%.sql --schema-and-data --script-drop-create --target-server-version 2016 --target-server-edition Standard --check-for-existence --constraint-names --collation -U ScriptingUser -P 1234 --exclude-use-database --include-objects "dbo." --display-progress
The key change is the --include-objects parameter, with a twist. I changed my script by adding this snippet:
--include-objects "dbo."
This tells mssql-scripter to only script out objects whose fully qualified name contains the "dbo." substring.
I also removed this parameter from my initial command:
--include-dependencies
since I script out everything in my database under the dbo schema.
This scripts out:
all of the objects in my database
it includes an IF EXISTS check for each object
it issues a DROP query to drop the existing object
it issues a CREATE query to create the new one
it issues multiple INSERT statements to also populate the database with data

Postgres: script to copy schema from internal server to deployment server, without entering passwords at every step

I want to copy my database schema (just the schema, not the data) from an internal server to an external server.
The problem I am facing is entering passwords at every step. Even though the steps to copy are pretty simple, I haven't been able to put together a script that automates the whole process.
What I have till now:
on internal server:
pg_dump -C -s --file=schema.txt {name}
scp schema.txt prakhar@{external server}:/home/prakhar
on external server:
dropdb {name}
createdb {name}
psql --file=schema.txt {name}
At each step I am prompted for password.
I want to do two things:
1: Run the script from the external server to fetch the schema from the internal one, or the other way around.
2: Incorporate the passwords for both the internal and external servers in a way that the script takes care of them for me.
I would recommend wrapping those commands in bash scripts, and in each one, prior to running the commands, add the following line:
export PGPASSWORD=<password>
Where <password> is the password you want to use. This will export it as an environment variable which is available to the Postgres commands.
Here are other methods, including PGPASSWORD, to specify the Postgres password.
For *nix commands like scp, there are other options. One is sshpass. That would work well if you wanted to keep this all as a shell script.
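A minimal sketch of that shell-script route, assuming sshpass is installed; the database name, host, user, and passwords below are all placeholders:
#!/bin/sh
set -e
# Run from the internal server; adjust the placeholders to your environment.
export PGPASSWORD=internal_db_password
pg_dump -C -s --file=schema.txt mydb
# sshpass supplies the scp/ssh password non-interactively.
sshpass -p 'ssh_password' scp schema.txt prakhar@external.example.com:/home/prakhar
# Recreate the schema on the external server.
sshpass -p 'ssh_password' ssh prakhar@external.example.com \
  'export PGPASSWORD=external_db_password; dropdb mydb; createdb mydb; psql --file=/home/prakhar/schema.txt mydb'
Embedding real passwords in a script is a trade-off; a .pgpass file or SSH keys would avoid storing them in the script at all.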
Another option, and the one I would probably use for this sort of thing, would be to scrap the shell script wrapper and instead use something like Python's Fabric.
With Fabric you can run commands using sudo, run commands on remote machines, and run local shell commands such as the Postgres utility programs (you would want to set PGPASSWORD in the environment hash within Fabric for that).

Django sqlite - how to change schema

I have made changes to my models.py in Django and now I want to synchronize these changes. It's fine to delete the tables and re-create them. However, nothing seems to work. I am using sqlite3:
syncdb: only works the first time, not with changes
"python manage.py sql my_site", followed by syncdb: I thought this would 'redo' it all, but the table still only has the old columns (or so I assume, since I get an error when I try to access the table using my model).
Then I figured I could access the database directly and delete the tables that way. However, I don't know how to get "into" the DB where I can execute commands. Typing sqlite3 at the command prompt is not recognized. I also tried "python manage.py sql my_site", but I again get the message that sqlite3 is not recognized.
Suggestions?
First you have to install the command line tool for sqlite. On Ubuntu/Debian, you can simply do
sudo apt-get install sqlite3
On windows, you can download it from here: http://www.sqlite.org/download.html. Look for the one that looks like sqlite-shell-win32-xxx.zip.
Use it like this:
> sqlite3 /path/to/your/database
-- show some help
.help
-- list all databases
.databases
-- clear the contents of a table
DELETE FROM <tablename>;
See also the command line reference: http://www.sqlite.org/sqlite.html
and the sqlite SQL reference: http://www.sqlite.org/lang.html.
Using the "ALTER TABLE" sql command, you can also add columns without deleting the entire contents of the table. To do this, compare the output of .schema in sqlite3, and the output of manage.py sql my_site to find out which columns you need to add.
An example:
ALTER TABLE "buildreport_series" ADD COLUMN "parameters" text
Use Django's built-in database management tool:
python manage.py dbshell
and issue the required SQL commands. The sql management command only prints to stdout the SQL needed to create the current tables (as defined by the current models).
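As a rough sketch of that workflow (the table name below is an assumption based on Django's default <app>_<model> naming):
# Open the sqlite shell for the project's configured database.
python manage.py dbshell
# Inside the shell, drop the stale table, then quit:
#   sqlite> DROP TABLE my_site_mymodel;
#   sqlite> .quit
# syncdb only creates tables that are missing, so it recreates the dropped one.
python manage.py syncdb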

How to import data from a view in another database (in another server) into a table in SQL Server 2000?

I was thinking about using the bcp command to solve the user authentication, but is bcp capable of importing into a table in my database? By the way, I am using a SQL Server 2000 environment.
Here's the code I have got so far:
SET @Command = 'bcp "SELECT vwTest.* FROM [myserver\sql].test.dbo.vwTest" queryout dbo.Test -C ACP -c -r \n -t ";" -S myserver\sql -Umyuser -Puser1'
EXEC master.dbo.xp_cmdshell @Command
Based on the comparison of BCP, BULK INSERT, and OPENROWSET (and, by extension, linked servers) here:
...the bcp utility runs out-of-process. To move data across process memory spaces, bcp must use inter-process data marshaling. Inter-process data marshaling is the process of converting parameters of a method call into a stream of bytes. This can add significant load to the processor. However, because bcp [both] parses the data and [converts the] data into [the] native storage format in the client process, they can offload parsing and data conversion from the SQL Server process.
...bcp possibly isn't the most efficient means of transferring data. You might be better off to:
Create a linked server instance to the other database
Use INSERT statements, so that the tables are populated based on records from the database exposed in the linked server instance.
Besides potentially being more efficient, you only need to set up the linked server instance once, versus running BCP to create output files every time you want to move data.
Mind that the linked server instance authenticates as a user on the other database, so permissions to the other database are based on that user's permissions.
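As an illustrative sketch only, reusing the connection details from the question; the linked server name OTHERSRV and the local table dbo.Test are assumptions, and osql is used since the question targets SQL Server 2000. The first command registers the other instance as a linked server (one-time setup); the second pulls the view's rows into a local table via a four-part name:
osql -S myserver\sql -U myuser -P user1 -Q "EXEC sp_addlinkedserver @server = 'OTHERSRV', @srvproduct = 'SQL Server'"
osql -S myserver\sql -U myuser -P user1 -Q "INSERT INTO dbo.Test SELECT * FROM OTHERSRV.test.dbo.vwTest"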
SURE !!
Use this command (adapt it for your needs) on your source machine:
bcp database.dbo.viewname out c:\temp\viewname.bcp -c -S(servername) -U(username) -P(password)
and then import the data back into your destination system using:
bcp newdatabase.dbo.importtable in c:\temp\viewname.bcp -c -S(servername) -U(username) -P(password)
That should grab the contents of your "viewname" from the source server, put it in a temporary file, and insert that file back into the new database on the new server.
Typically, you would load those data rows into a new, temporary staging table, and from there, use T-SQL or other means to insert that data into your actual tables.
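A hedged sketch of that last step, with made-up table and column names, again via osql:
osql -S(servername) -U(username) -P(password) -Q "INSERT INTO dbo.RealTable (col1, col2) SELECT col1, col2 FROM dbo.importtable"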
Check out the MSDN documentation on bcp in SQL Server 2000 for details on all those switches and their meanings.
