Getting ERROR: kind server_data NIY - database

I wrote a bash script to curl an API, then insert the JSON data into a table.
json=$(curl -s --user username:passwd "api url here")
psql --user username pass -c "INSERT INTO testdb (kind, data) VALUES ('server_data', '${json}');"
Then it says: ERROR: kind server_data NIY
The columns for testdb are:
kind text
data json

It turns out that we had not defined the function that is supposed to be called when kind is "server_data".
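For anyone hitting something similar: that message comes from our own trigger, not from Postgres itself. A minimal sketch of the shape of the problem, with hypothetical names (testdb_dispatch, handle_client_data), since the real trigger isn't shown here:

-- Hypothetical dispatch trigger: any kind without a branch falls through
-- to the RAISE, which is exactly what produces "ERROR: kind server_data NIY".
CREATE OR REPLACE FUNCTION testdb_dispatch() RETURNS trigger AS $$
BEGIN
    IF NEW.kind = 'client_data' THEN
        PERFORM handle_client_data(NEW.data);
    -- the 'server_data' branch (and its handler function) had not been written yet
    ELSE
        RAISE EXCEPTION 'kind % NIY', NEW.kind;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER testdb_kind_dispatch BEFORE INSERT ON testdb
    FOR EACH ROW EXECUTE PROCEDURE testdb_dispatch();

Defining the missing server_data branch and its handler function made the insert go through.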

Related

SQL Server and sqlcmd can't insert a new record

I'm using an Ubuntu server with SQL Server installed and fail2ban securing my services against brute force, along with some other security measures.
So the problem is this: I created a bash script that takes the arguments from fail2ban (IP, jail name, etc.) and creates a record in SQL Server via sqlcmd. When I test the script it works perfectly.
But when it comes to real values, it takes a wrong turn.
The bash script is this:
sqlcmd=/opt/mssql-tools/bin/sqlcmd   # full path of sqlcmd
CMD_SQL="${sqlcmd} -S localhost -U randomuser -P randompassword -Q "   # the password is read from a file using grep
#...
#code for checking some stuff and creating variables
#...
${CMD_SQL} <<EOF "INSERT INTO ${DB_TABLE} (ip, ports, protocol, jail,hostname, country, rdns, timestamp,failures, loglines) VALUES ('${_ip}', '${_ports}', '${_protocol}', '${_jail}','${_hostname}', '${_country}', '${_rdns}', CURRENT_TIMESTAMP,'${_failures}', '${_loglines}');"
EOF
When the script ran from the fail2ban service, I saw the real values in the logs, but they didn't get inserted into SQL. So I tried to run the command myself:
sudo /opt/mssql-tools/bin/sqlcmd -S localhost -U randomuser -P randomPass -Q INSERT INTO dbo.banned (ip, ports, protocol, jail,hostname, country, rdns, timestamp,failures, loglines) VALUES ('12.12.12.12', 'All-ports', 'tcp', 'mssqld','djasserver', 'SA, Saudi Arabia', '', CURRENT_TIMESTAMP,'5', '2022-03-12 04:17:10.20 Logon Login failed for user 'sa'. Reason: Could not find a login matching the name provided. [CLIENT: 12.12.12.12] 2022-03-12 04:17:10.54 Logon Login failed for user 'sa'. Reason: Could not find a login matching the name provided. [CLIENT: 12.12.12.12] 2022-03-12 04:17:10.89 Logon Login failed for user 'sa'. Reason: Could not find a login matching the name provided. [CLIENT: 12.12.12.12] 2022-03-12 04:17:11.22 Logon Login failed for user 'sa'. Reason: Could not find a login matching the name provided. [CLIENT: 12.12.12.12] 2022-03-12 04:17:11.56 Logon Login failed for user 'sa'. Reason: Could not find a login matching the name provided. [CLIENT: 12.12.12.12]');
So at first I got an error:
-bash: syntax error near unexpected token `('
and I thought to myself that the query must be in quotation marks, so I added this:
sqlcmd=/opt/mssql-tools/bin/sqlcmd   # full path of sqlcmd
CMD_SQL="${sqlcmd} -S localhost -U randomuser -P randompassword -Q "   # the password is read from a file using grep
#...
#code for checking some stuff and creating variables
#...
${CMD_SQL} <<EOF "\"INSERT INTO ${DB_TABLE} (ip, ports, protocol, jail,hostname, country, rdns, timestamp,failures, loglines) VALUES ('${_ip}', '${_ports}', '${_protocol}', '${_jail}','${_hostname}', '${_country}', '${_rdns}', CURRENT_TIMESTAMP,'${_failures}', '${_loglines}');\""
EOF
I ran it again with the real values above and got this error from SQL Server:
Msg 102, Level 15, State 1, Server myserver, Line 1
Incorrect syntax near 'sa'.
And I thought, okay, it probably can't handle some of the escape characters the values contain.
The question is: how do I fix this SQL error? Thanks for any answer.
PS: of course the username, password and IPs aren't real; I changed them to protect the attackers' privacy and mine.
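For what it's worth, two things would likely fix this: don't store the whole command in a variable (bash word splitting mangles the quoting when it is expanded), and double any single quotes inside the values before interpolating them into the T-SQL string literal. A sketch under those assumptions (the esc helper and the SQL_PASS variable are mine, not from the original script):

esc() { printf '%s' "$1" | sed "s/'/''/g"; }   # double single quotes for T-SQL

sqlcmd=/opt/mssql-tools/bin/sqlcmd
"$sqlcmd" -S localhost -U randomuser -P "$SQL_PASS" -Q "
INSERT INTO ${DB_TABLE} (ip, ports, protocol, jail, hostname, country, rdns, timestamp, failures, loglines)
VALUES ('$(esc "$_ip")', '$(esc "$_ports")', '$(esc "$_protocol")', '$(esc "$_jail")',
        '$(esc "$_hostname")', '$(esc "$_country")', '$(esc "$_rdns")', CURRENT_TIMESTAMP,
        '$(esc "$_failures")', '$(esc "$_loglines")');"

With the quotes doubled, log lines such as Login failed for user 'sa' no longer terminate the SQL string early.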

Issues using "-f" flag in CQLSH to run a query.cql file

I'm using cqlsh to add data to Cassandra with a BATCH query. I can load the data using the "-e" flag, but not from a file using the "-f" flag. I think that's because the file is local and Cassandra is remote. Details below:
This is a sample of my query (there are more rows to insert, obviously):
BEGIN BATCH;
INSERT INTO keyspace.table (id, field1) VALUES ('1','value1');
INSERT INTO keyspace.table (id, field1) VALUES ('2','value2');
APPLY BATCH;
If I enter the query via the "-e" flag then it works no problem:
>cqlsh -e "BEGIN BATCH; INSERT INTO keyspace.table (id, field1) VALUES ('1','value1'); INSERT INTO keyspace.table (id, field1) VALUES ('2','value2'); APPLY BATCH;" -u username -p password -k keyspace 99.99.99.99
But if I save the query to a text file (query.cql) and call as below, I get the following output:
>cqlsh -f query.cql -u username -p password -k keyspace 99.99.99.99
Using 3 child processes
Starting copy of keyspace.table with columns ['id', 'field1'].
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.076 seconds (0 skipped).
Cassandra obviously accepts the command but doesn't read the file; I'm guessing that's because Cassandra is located on a remote server and the file is local. The Cassandra instance I'm using is a managed service with other users, so I don't have access to it to copy files into folders.
How do I run this query on a remote instance of Cassandra where I only have CLI access?
I want to be able to use another tool to build the query.cql file and have a batch job run the command with the "-f" flag, but I can't work out where I'm going wrong.
You're executing a local cqlsh client, so it should be able to access your local query.cql file.
Try removing the BEGIN BATCH and APPLY BATCH lines, leaving just the two INSERT statements in query.cql, and retry.
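That is, query.cql would contain just the plain statements (same placeholder names as in the question):

INSERT INTO keyspace.table (id, field1) VALUES ('1','value1');
INSERT INTO keyspace.table (id, field1) VALUES ('2','value2');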
Another solution for inserting data quickly is to provide a CSV file and use the COPY command inside cqlsh. Read this blog post: http://www.datastax.com/dev/blog/new-features-in-cqlsh-copy
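COPY runs inside the cqlsh client itself, so it reads a file that is local to wherever cqlsh runs, even against a remote cluster. A minimal sketch (the data.csv file name and the HEADER option are assumptions):

COPY keyspace.table (id, field1) FROM 'data.csv' WITH HEADER = true;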
Scripting the inserts by generating one cqlsh -e '...' invocation per row is feasible, but it will be horribly slow.

How do you delete a column's data from a PostgreSQL database using DataMapper and Sinatra?

I have a table with three columns (id, name, age). I would like to keep the name and id the same, but remove all age data, and be able to reassign ages.
I.e., I want to clear the data from one column only, not delete the entire column.
I am using sinatra, datamapper and postgresql.
Programmatically you could do something like this:
@myvariable = MyModel.all
@myvariable.each do |m|
  m.update(:age => nil)
end
More info on Datamapper can be found here: http://datamapper.org/docs/
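As a variation, a DataMapper collection also responds to update directly, so the same reset can be written in one call (a sketch against the same model; see the docs above for the difference between update and update!):

MyModel.all.update(:age => nil)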
Or if you want to do it by hand, if your database is on Heroku, you can connect to the database instance on Heroku like so:
heroku pg:psql
Then you'll be connected to the database and you can just type out the SQL like Gus suggested:
UPDATE table_name SET age = NULL;
If your app is not on Heroku and you're just connecting to a local postgresql instance, it would be like so:
psql -d your_database -U your_user
More info on psql can be found here: http://www.postgresql.org/docs/9.2/static/app-psql.html
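Or as a one-liner, combining the two steps above:

psql -d your_database -U your_user -c "UPDATE table_name SET age = NULL;"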

Unaccent issue when restoring a Postgres database

I want to restore a particular database under another database name on another server. So far, so good.
I used this command:
pg_dump -U postgres -F c -O -b -f maindb.dump maindb
to dump the main database on the production server. Then I use this command:
pg_restore --verbose -O -l -d restoredb maindb.dump
to restore the database into another database on our test server. It restores mostly OK, but there are some errors, like:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3595; 1259 213452 INDEX idx_clientnomclient maindbuser
pg_restore: [archiver (db)] could not execute query: ERROR: function unaccent(text) does not exist
LINE 1: SELECT unaccent(lower($1));
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT unaccent(lower($1));
CONTEXT: SQL function "cyunaccent" during inlining
Command was: CREATE INDEX idx_clientnomclient ON client USING btree (public.cyunaccent((lower((nomclient)::text))::character varying));
cyunaccent is a function in the public schema, and it does get created by the restore.
After the restore, I am able to re-create those indexes perfectly with the same SQL, without any errors.
I've also tried restoring with pg_restore's -1 (single transaction) option, but it doesn't help.
What am I doing wrong?
I just found the problem, and I was able to narrow it down to a simple test-case.
CREATE SCHEMA intranet;
CREATE EXTENSION IF NOT EXISTS unaccent WITH SCHEMA public;
SET search_path = public, pg_catalog;
CREATE FUNCTION cyunaccent(character varying) RETURNS character varying
    LANGUAGE sql IMMUTABLE
    AS $_$ SELECT unaccent(lower($1)); $_$;
SET search_path = intranet, pg_catalog;
CREATE TABLE intranet.client (
    codeclient character varying(10) NOT NULL,
    noclient character varying(7),
    nomclient character varying(200) COLLATE pg_catalog."fr_CA"
);
ALTER TABLE ONLY client ADD CONSTRAINT client_pkey PRIMARY KEY (codeclient);
CREATE INDEX idx_clientnomclient ON client USING btree (public.cyunaccent((lower((nomclient)::text))::character varying));
This test case is from a pg_dump done in plain text.
As you can see, the cyunaccent function is created in the public schema, as it's later used by tables in other schemas.
psql/pg_restore won't re-create the index, as it cannot find the function, despite the fact that the schema name is specified to reference it. The problem lies in the
SET search_path = intranet, pg_catalog;
call. Changing it to
SET search_path = intranet, public, pg_catalog;
solves the problem. I've submitted a bug report to Postgres about this; it's not yet in the queue.
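An alternative that avoids depending on search_path at restore time altogether is to schema-qualify the unaccent() call inside the function body, e.g.:

CREATE OR REPLACE FUNCTION cyunaccent(character varying) RETURNS character varying
    LANGUAGE sql IMMUTABLE
    AS $_$ SELECT public.unaccent(lower($1)); $_$;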

Clone MySQL database

I have a database with 120 tables on a server.
I want to clone the whole database with a new db name and the copied data.
Is there an efficient way to do this?
$ mysqldump yourFirstDatabase -u user -ppassword > yourDatabase.sql
$ mysql yourSecondDatabase -u user -ppassword < yourDatabase.sql
mysqldump -u <user> --password=<password> <DATABASE_NAME> | mysql -u <user> --password=<password> -h <hostname> <DATABASE_NAME_NEW>
Like the accepted answer, but without .sql files:
mysqldump sourcedb -u <USERNAME> -p<PASS> | mysql destdb -u <USERNAME> -p<PASS>
In case you use phpMyAdmin
Select the database you wish to copy (by clicking on the database from the phpMyAdmin home screen).
Once inside the database, select the Operations tab.
Scroll down to the section where it says "Copy database to:"
Type in the name of the new database.
Select "structure and data" to copy everything. Alternately, you can select "Structure only" if you want the columns but not the data.
Check the box "CREATE DATABASE before copying" to create a new database.
Check the box "Add AUTO_INCREMENT value."
Click on the Go button to proceed.
There is the mysqldbcopy tool from the MySQL Utilities package.
http://dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbcopy.html
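Usage is roughly like this (a sketch; the database names are placeholders, and the exact options may vary by version, so check the linked manual):

mysqldbcopy --source=user:pass@localhost --destination=user:pass@localhost maindb:maindb_copy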
If you want to make sure it is an exact clone, the receiving database needs to be entirely cleared / dropped. This way, the new db only has the tables in your import file and nothing else. Otherwise, your receiving database could retain tables that weren't specified in your import file.
ex from prior answers:
DB1 == tableA, tableB
DB2 == tableB, tableC
DB1 imported to -> DB2
DB2 == tableA, tableB, tableC //true clone should not contain tableC
The change is easy with --databases and --add-drop-database (see the MySQL docs). This adds the DROP statement to the dump so your new database will be an exact replica:
$ mysqldump -h $ip -u $user -p$pass --databases $dbname --add-drop-database > $file.sql
$ mysql -h $ip $dbname -u $user -p$pass < $file.sql
Of course, replace the $ variables, and as always, there is no space between -p and the password. For extra security, strip $pass and use a bare -p so mysql prompts for the password.
// Clone every table of the `rds` database into a new database named after
// the previous year, using CREATE TABLE ... LIKE plus INSERT ... SELECT.
// Assumes $mysqli is an already-connected mysqli instance.
$newdb = (date('Y') - 1);
$mysqli->query("DROP DATABASE `".$newdb."`;");
$mysqli->query("CREATE DATABASE `".$newdb."`;");

$query = "
    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA LIKE 'rds'
";
$result = $mysqli->query($query)->fetch_all(MYSQLI_ASSOC);

foreach ($result as $val) {
    echo $val['TABLE_NAME'].PHP_EOL;
    // copy the table structure, then the data
    $mysqli->query("CREATE TABLE `".$newdb."`.`".$val['TABLE_NAME']."` LIKE rds.`".$val['TABLE_NAME']."`");
    $mysqli->query("INSERT `".$newdb."`.`".$val['TABLE_NAME']."` SELECT * FROM rds.`".$val['TABLE_NAME']."`");
}
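One caveat with this approach: CREATE TABLE ... LIKE copies column definitions and indexes but not foreign keys or triggers, and INFORMATION_SCHEMA.TABLES also lists views, for which CREATE TABLE ... LIKE fails; filtering on TABLE_TYPE = 'BASE TABLE' avoids that.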
