Really strange yet very common error. Relation "users" does not exist. I know what you are saying - that's been asked before! And it has, but work with me because I'm doing a bunch of checks and it still doesn't go through.
First, this is the migration:
CREATE TABLE users (
id serial PRIMARY KEY,
obfuscated_id VARCHAR(128) NOT NULL UNIQUE,
email VARCHAR(128) NOT NULL UNIQUE,
encrypted_password VARCHAR(128) NOT NULL,
created_at TIMESTAMP,
updated_at TIMESTAMP,
active BOOLEAN DEFAULT TRUE
);
And this is the command I'm using to run that migration:
migrate -path ./migrations -database postgres://myname:password@localhost:5432/five_three_one_development?sslmode=disable up
I've manually tested that the db exists:
psql
\c five_three_one_development
\dt
Schema | Name | Type | Owner
--------+-------------------+-------+----------
public | schema_migrations | table | myname
public | users | table | myname
I've manually altered the password for the user with \password and set it to password.
Here are the environment variables:
DB_NAME=five_three_one_development
DB_PASS=password
DB_USER=myname
And when I log those variables I get the same values back:
fmt.Printf("NAME:" + os.Getenv("DB_NAME"))
fmt.Printf("USER:" + os.Getenv("DB_USER"))
fmt.Printf("DB_PASS:" + os.Getenv("DB_PASS"))
I also perform an environment test at the top of my development server to check that the db is reachable.
func (c *ApplicationContext) PerformEnvChecks() {
err := c.Database.Ping()
if err != nil {
log.Fatalf("Database environment check failed: %s", err)
}
r, err := c.Database.Exec("SELECT * FROM users")
if err != nil {
log.Fatal(err) // --> pq: relation "users" does not exist
}
fmt.Printf("Tables: %v", r)
}
And then it fails on the c.Database.Exec("SELECT * FROM users") line, which means it's connected to the database but cannot find the table.
Out of ideas on this one - thoughts?
Edit
Feature request idea for the PostgreSQL folks: \connection_string -> returns the PostgreSQL connection string for the current user and the currently connected database.
The Go sql driver isn't clear about the order in which it expects the connection parameters. Ditch building them up individually: save the whole connection string in a DATABASE_URL variable and use that. Annoying, but it seems to have solved the problem.
Pass this as a connection string: postgres://myname:password@localhost:5432/five_three_one_development?sslmode=disable and forget trying to build it up.
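A diagnostic worth adding here (my suggestion, not part of the original edit): run a couple of introspection queries through the same connection the app uses, to confirm exactly which database and schema it is actually looking at. Everything below is standard PostgreSQL:
-- Where is this connection really pointed?
SELECT current_database(), current_user, inet_server_addr(), inet_server_port();
-- Which schemas are searched, and where does "users" actually live?
SHOW search_path;
SELECT schemaname, tablename FROM pg_tables WHERE tablename = 'users';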
I accidentally deleted a table's row from pg_class, and I have the same table present on a different server inside a schema. How do I restore it?
I have tried this
psql -U {user-name} -d {destination_db} -f {dumpfilename.sql}
This is what I'm getting:
ERROR: type "food_ingredients" already exists
HINT: A relation has an associated type of the same name, so you must use a name that doesn't conflict with any existing type.
ERROR: relation "food_ingredients" does not exist
ERROR: syntax error at or near "19411"
LINE 1: 19411 10405 2074 45.3333333333 0.17550085492131515 NULL NULL...
ERROR: relation "food_ingredients" does not exist
food_ingredients is the table whose row I deleted from pg_class.
That's what you get from messing with system catalogs.
The simple and correct answer is “restore from a backup”, but something tells me that that's not the answer you were looking for.
You could drop the type that belongs to the table, all indexes on the table, all constraints, toast tables and so on, but you'd probably forget to drop something or drop something you shouldn't and end up with a bigger mess than before.
Moreover, the table file would be left behind and it would be hard to identify and delete it.
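Purely as an illustration of how much is left dangling (and with the same warning about touching catalogs), the orphaned type and anything still depending on it can be located with standard catalog queries:
-- The row type left behind by the deleted pg_class entry
SELECT oid, typname, typrelid FROM pg_type WHERE typname = 'food_ingredients';
-- Objects that still reference that type
SELECT classid::regclass, objid, deptype
FROM pg_depend
WHERE refclassid = 'pg_type'::regclass
  AND refobjid = (SELECT oid FROM pg_type WHERE typname = 'food_ingredients');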
It would be appealing to try and recreate the pg_class row that you dropped, but you wouldn't be able to create it with the correct oid since you cannot directly insert a certain oid or update that column.
You could dump the whole database cluster with pg_dumpall, create a new cluster with initdb and restore the backup there, but this might fail because of the data inconsistencies.
Really, the best thing is to restore a backup.
I was able to perform a partial restore using the pg_dirtyread extension.
Initial situation was:
relation "messed_table" does not exist`
The following query provided me the values I had dropped:
SELECT * FROM pg_dirtyread('pg_class'::regclass)
as t(relname name, relnamespace oid, reltype oid, reloftype oid, relowner oid, relam oid, relfilenode oid, reltablespace oid, relpages integer, reltuples real, relallvisible integer, reltoastrelid oid, relhasindex boolean, relisshared boolean, relpersistence "char", relkind "char", relnatts smallint, relchecks smallint, relhasoids boolean, relhaspkey boolean, relhasrules boolean, relhastriggers boolean, relhassubclass boolean, relrowsecurity boolean, relforcerowsecurity boolean, relispopulated boolean, relreplident "char", relispartition boolean, relfrozenxid xid, relminmxid xid, relacl aclitem[], reloptions text[], relpartbound pg_node_tree)
WHERE relname = 'messed_table';
I used the result for performing an INSERT:
INSERT INTO pg_class
(relname,relnamespace,reltype,reloftype,relowner,relam,relfilenode,reltablespace,relpages,reltuples,relallvisible,reltoastrelid,relhasindex,relisshared,relpersistence,relkind,relnatts,relchecks,relhasoids,relhaspkey,relhasrules,relhastriggers,relhassubclass,relrowsecurity,relforcerowsecurity,relispopulated,relreplident,relispartition,relfrozenxid,relminmxid,relacl,reloptions,relpartbound)
VALUES('messed_table',16447,17863,0,10,0,17861,0,0,0,0,0,false,false,'p','r',78,0,false,false,false,false,false,false,false,true,'d',false,1129231::text::xid,1::text::xid,null,null,null);
At this stage executing a SELECT * from messed_table returned
catalog is missing 78 attribute(s) for relid 26130
So I created a new table messed_table_copy with the same structure as the messed table.
I exported to a file the pg_attribute content for the messed_table_copy table using this query:
Copy (SELECT * FROM pg_attribute WHERE attrelid = (SELECT oid from pg_class WHERE relname LIKE 'messed_table_copy') and attnum > 0) To '/tmp/recover.csv' With CSV DELIMITER ',' HEADER;
I changed the attrelid value to the relid value pointed out in the error message and I imported the data from the file again:
COPY pg_attribute(attrelid,attname,atttypid,attstattarget,attlen,attnum,attndims,attcacheoff,atttypmod,attbyval,attstorage,attalign,attnotnull,atthasdef,attidentity,attisdropped,attislocal,attinhcount,attcollation,attacl,attoptions,attfdwoptions) FROM '/tmp/recover.csv' DELIMITER ',' CSV HEADER;
At this stage SELECT count(*) FROM messed_table worked but a SELECT * FROM messed_table crashed the database with the following error:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Performing some queries that specified only a subset of the available columns, I noticed that selecting certain columns caused a crash. I was able to recover about 90% of the data, losing the content of some columns. Good luck, and remember never to play with the pg_class table again.
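One closing step I would add (the column names below are hypothetical; use whichever columns turned out to be readable): copy the surviving data into a clean table rather than keep relying on the hand-patched catalog rows.
-- Hypothetical column list: keep only the columns that can be read without crashing the backend
CREATE TABLE messed_table_rescued AS
SELECT readable_column_1, readable_column_2
FROM messed_table;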
There is this script in SQL Server that needs to be converted to PostgreSQL.
Here is the script:
UPDATE Categories_convert SET Categories_convert.ParentID =
Categories_convert_1.CategoryID
FROM Categories_convert LEFT OUTER JOIN Categories_convert AS
Categories_convert_1 ON Categories_convert.MainGroupID_FK =
Categories_convert_1.MainGroupID
WHERE (((Categories_convert.Level)=2));
Then I tried to convert it to postgres. Here is the script:
UPDATE x_tmp_categories_convert SET orig.parentid =
cat2.categoryid
FROM x_tmp_categories_convert LEFT OUTER JOIN x_tmp_categories_convert AS
cat2 ON x_tmp_categories_convert.maingroupid_fk =
x_tmp_categories_convert.maingroupid
WHERE (((cat.level)=2));
Note that I have already re-created the SQL Server table Categories_convert in PostgreSQL and renamed it to x_tmp_categories_convert.
All the fields in PostgreSQL are in lowercase.
Now the problem is that when I execute the converted script in PostgreSQL, this error occurs:
ERROR: table name "x_tmp_categories_convert" specified more than once
SQL state: 42712
What did I do wrong in the conversion?
UPDATE:
I have tried @a_horse_with_no_name's answer but it didn't update the records at all. The parentid field is still empty. It is supposed to map the parentid of each category based on its maingroupid_fk.
Below is a snapshot of the records after executing the suggested script.
I have redacted the names for disclosure reasons.
Records snapshot link
UPDATE v2:
I am using PHP to migrate the data, so pardon the variable names used.
Here are the 2 insert statements used before the questioned update script is executed:
INSERT INTO x_tmp_categories_convert(maingroupid,name,vendorid,level,parentid)
VALUES ($id,'$mainGroup',$vendorID,1,Null);
INSERT INTO x_tmp_categories_convert(subgroupid,maingroupid_fk,name,vendorid,level)
VALUES ($id,'$mainGroupId','$subGroup',$vendorID,2);
Also, this is the table definition of the x_tmp_categories_convert table:
CREATE TABLE x_tmp_categories_convert
(
categoryid serial NOT NULL,
parentid double precision,
name character varying(255),
level double precision,
vendorid double precision,
maingroupid integer,
subgroupid integer,
maingroupid_fk integer,
pageid integer,
subgroupid_fk integer,
CONSTRAINT code_pk PRIMARY KEY (categoryid)
)
WITH (
OIDS=FALSE
);
UPDATE v3:
Already SOLVED by a_horse_with_no_name. Thank you
I have edited @a_horse_with_no_name's answer to make it work. I guess it was just a copy-paste error.
Here is the final script:
UPDATE x_tmp_categories_convert
SET parentid = cat.categoryid
FROM x_tmp_categories_convert AS cat
WHERE x_tmp_categories_convert.maingroupid_fk = cat.maingroupid
AND x_tmp_categories_convert.level = 2;
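Not part of the answer itself, but a quick sanity check against the same table: once the update has run, no level-2 row that has a matching main group should be left without a parent.
-- Should return zero rows after the UPDATE above
SELECT c.categoryid, c.maingroupid_fk
FROM x_tmp_categories_convert c
WHERE c.level = 2
  AND c.parentid IS NULL
  AND EXISTS (SELECT 1 FROM x_tmp_categories_convert p WHERE p.maingroupid = c.maingroupid_fk);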
For @a_horse_with_no_name: I am very grateful, as I could not have come up with the solution without your help. +1 for you, man. Thank you.
I have a page that asks users' opinion about a topic. Their responses are then saved into a table. What I want to do is check how many users selected option 1, 2, 3, or 4.
What I have now are multiple T-SQL queries that run successfully, but I believe there is a simpler version of the code I have written. I would be grateful if someone could simplify my queries into one single query. Thank you.
Here is a sample of the data in the database table:
$sql4 = "SELECT COUNT(CO) FROM GnAppItms WHERE CO='1' AND MountID='".$mountID."'";
$stmt4 = sqlsrv_query($conn2, $sql4);
$row4 = sqlsrv_fetch_array($stmt4);
$sql5="SELECT COUNT(CO) FROM GnAppItms WHERE CO='2' AND MountID='".$mountID."'";
$stmt5=sqlsrv_query($conn2,$sql5);
$row5=sqlsrv_fetch_array($stmt5);
$sql6="SELECT COUNT(CO) FROM GnAppItms WHERE CO='3' AND MountID='".$mountID."'";
$stmt6=sqlsrv_query($conn2,$sql6);
$row6=sqlsrv_fetch_array($stmt6);
$sql7="SELECT COUNT(CO) FROM GnAppItms WHERE CO='4' AND MountID='".$mountID."'";
$stmt7=sqlsrv_query($conn2,$sql7);
$row7=sqlsrv_fetch_array($stmt7);
You can do it by using GROUP BY in SQL Server.
Example:
create table a
(id int,
mountid nvarchar(100),
co int
)
insert into a values (1,'aa',1)
insert into a values (2,'aa',2)
insert into a values (3,'aa',1)
insert into a values (4,'aa',2)
insert into a values (5,'aa',3)
Query
select co,count(co)as countofco from a
where mountid='aa'
group by
co
result
co countofco
1 2
2 2
3 1
Note: Beware of SQL injection when you are writing a SQL query, so always use parameterized queries. You can edit the example code above and turn it into a parameterized query to prevent SQL injection.
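If a single row with one column per option is preferred (closer to what the four separate PHP queries produced), conditional aggregation over the same sample table is another option. This is a sketch, not part of the original answer, and it runs on both SQL Server and PostgreSQL:
select
  sum(case when co = 1 then 1 else 0 end) as co_1,
  sum(case when co = 2 then 1 else 0 end) as co_2,
  sum(case when co = 3 then 1 else 0 end) as co_3,
  sum(case when co = 4 then 1 else 0 end) as co_4
from a
where mountid = 'aa';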
I am trying to use Dapper to support my data access for my server app.
Alongside my server app, another application drops records into my database at a rate of 400 per minute.
My app pulls them out in batches, processes them, and then deletes them from the database.
Since data continues to flow into the database while I am processing, I don't have a good way to say delete from myTable where allProcessed = true.
However, I do know the PK values of the rows to delete, so I want to do a delete from myTable where Id in @listToDelete.
The problem is that if my server goes down for even 6 minutes, then I have over 2100 rows to delete.
Since Dapper takes my @listToDelete and turns each one into a parameter, my call to delete fails. (Causing my data purging to get even further behind.)
What is the best way to deal with this in Dapper?
NOTES:
I have looked at Table-Valued Parameters but from what I can see, they are not very performant. This piece of my architecture is the bottleneck of my system and I need to be very, very fast.
One option is to create a temp table on the server and then use the bulk load facility to upload all the IDs into that table at once. Then use a join, EXISTS or IN clause to delete only the records that you uploaded into your temp table.
Bulk loads are a well-optimized path in SQL Server and it should be very fast.
For example:
Execute the statement CREATE TABLE #RowsToDelete(ID INT PRIMARY KEY)
Use a bulk load to insert keys into #RowsToDelete
Execute DELETE FROM myTable where Id IN (SELECT ID FROM #RowsToDelete)
Execute DROP TABLE #RowsToDelete (the table will also be automatically dropped if you close the session)
(Assuming Dapper) code example:
conn.Open();
var columnName = "ID";
conn.Execute(string.Format("CREATE TABLE #{0}s({0} INT PRIMARY KEY)", columnName));
using (var bulkCopy = new SqlBulkCopy(conn))
{
bulkCopy.BatchSize = ids.Count;
bulkCopy.DestinationTableName = string.Format("#{0}s", columnName);
var table = new DataTable();
table.Columns.Add(columnName, typeof (int));
bulkCopy.ColumnMappings.Add(columnName, columnName);
foreach (var id in ids)
{
table.Rows.Add(id);
}
bulkCopy.WriteToServer(table);
}
//or do other things with your table instead of deleting here
conn.Execute(string.Format(@"DELETE FROM myTable where Id IN
(SELECT {0} FROM #{0}s)", columnName));
conn.Execute(string.Format("DROP TABLE #{0}s", columnName));
To get this code working, I went dark side.
Since Dapper turns my list into parameters, and SQL Server can't handle that many parameters (I had never even needed double-digit parameter counts before), I had to go with dynamic SQL.
So here was my solution:
string listOfIdsJoined = "("+String.Join(",", listOfIds.ToArray())+")";
connection.Execute("delete from myTable where Id in " + listOfIdsJoined);
Before everyone grabs their torches and pitchforks, let me explain.
This code runs on a server whose only input is a data feed from a Mainframe system.
The list I am dynamically creating is a list of longs/bigints.
The longs/bigints are from an Identity column.
I know constructing dynamic SQL is bad juju, but in this case, I just can't see how it leads to a security risk.
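For completeness, the table-valued-parameter route that the question dismissed would look roughly like this on the SQL Server side (a sketch; the type name dbo.IdList and the procedure name are my own, and whether it is fast enough here is exactly the open question):
-- One-time setup: a table type to carry the list of ids
CREATE TYPE dbo.IdList AS TABLE (Id BIGINT PRIMARY KEY);
GO
-- A procedure that deletes every row whose id appears in the TVP
CREATE PROCEDURE dbo.DeleteProcessedRows
    @Ids dbo.IdList READONLY
AS
    DELETE FROM myTable WHERE Id IN (SELECT Id FROM @Ids);
GO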
Dapper expects a list of objects with the parameter as a property, so in the above case a list of objects having Id as a property will work.
connection.Execute("delete from myTable where Id in (#Id)", listOfIds.AsEnumerable().Select(i=> new { Id = i }).ToList());
This will work.