shp2pgsql "ERROR: INSERT has more target columns than expressions" - postgis
I'm using the script produced by loader_generate_script in PostGIS to load TIGER census data, and I'm getting an error while loading EDGE data. I was able to locate the error by appending -v ON_ERROR_STOP=1 to all psql calls.
NOTICE: INSERT INTO tiger_data.al_edges(statefp,countyfp,tlid,tfidl,tfidr,mtfcc,fullname,smid,lfromadd,ltoadd,rfromadd,rtoadd,zipl,zipr,featcat,hydroflg,railflg,roadflg,olfflg,passflg,divroad,exttyp,ttyp,deckedroad,artpath,persist,gcseflg,offsetl,offsetr,tnidf,tnidt,the_geom) SELECT statefp,countyfp,tlid,tfidl,tfidr,mtfcc,fullname,smid,lfromadd,ltoadd,rfromadd,rtoadd,zipl,zipr,featcat,hydroflg,railflg,roadflg,olfflg,passflg,exttyp,ttyp,deckedroad,artpath,persist,gcseflg,offsetl,offsetr,tnidf,tnidt,the_geom FROM tiger_staging.al_edges;
ERROR: INSERT has more target columns than expressions
LINE 1: ...tpath,persist,gcseflg,offsetl,offsetr,tnidf,tnidt,the_geom) ...
The error says pretty clearly that the columns don't match (the SELECT list is missing a column called divroad). How do I mitigate that in shp2pgsql, since that is what is creating the INSERT statement? Any help is appreciated!
Turns out the issue stems from my PostGIS version. Azure's managed Postgres uses POSTGIS="2.3.2 r15302" as its PostGIS extension, and it looks like the TIGER2017 data issues were patched in PostGIS 2.4.1. The solution is to upgrade PostGIS, though that may mean not using Azure managed Postgres for now.
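If upgrading isn't immediately possible, a possible stopgap (an untested sketch, built from the column lists in the error above, and assuming tiger_staging.al_edges really does contain a divroad column) is to run the staged copy by hand with the target and SELECT lists matched up:

INSERT INTO tiger_data.al_edges(statefp, countyfp, tlid, tfidl, tfidr, mtfcc, fullname, smid,
    lfromadd, ltoadd, rfromadd, rtoadd, zipl, zipr, featcat, hydroflg, railflg, roadflg,
    olfflg, passflg, divroad, exttyp, ttyp, deckedroad, artpath, persist, gcseflg,
    offsetl, offsetr, tnidf, tnidt, the_geom)
SELECT statefp, countyfp, tlid, tfidl, tfidr, mtfcc, fullname, smid,
    lfromadd, ltoadd, rfromadd, rtoadd, zipl, zipr, featcat, hydroflg, railflg, roadflg,
    olfflg, passflg, divroad, exttyp, ttyp, deckedroad, artpath, persist, gcseflg,
    offsetl, offsetr, tnidf, tnidt, the_geom
FROM tiger_staging.al_edges;
-- if the staging table turns out to lack divroad (which is what tripped the loader),
-- substitute NULL::character varying for it in the SELECT list instead

After a successful manual copy, the rest of the generated loader script can be resumed from the following step.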
Related
Errors in Data-Transfer due to quotes in column-fields
I am trying to do a standard data transfer from a bunch of tables (from CSV, DB2 and PostgreSQL) into a PostgreSQL instance. The tool I am using is DBeaver Light's Data Transfer task. However, I am facing a ton of errors and I have tracked them down to the actual content of the data. What I see in the data are examples like this: Resulting in the following error: I believe the errors arise due to the " in the data, since these are used to qualify columns like so: SELECT "Column1", "Column2" FROM "schema"."table" The settings I am using are: Can anyone think of a way to solve this?
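If the transfer tool keeps choking on embedded double quotes, one common fallback is to export the source to CSV with standard (RFC 4180) quoting and load it with PostgreSQL's COPY, which expects embedded quotes to be doubled. This is only a sketch: the file path is invented, and the table and column names are the ones quoted in the question.

-- In the CSV file, a value containing a double quote must be quoted and the
-- embedded quote doubled, e.g.:  "He said ""hello""",42
COPY "schema"."table" ("Column1", "Column2")
FROM '/path/to/export.csv'
WITH (FORMAT csv, HEADER true, QUOTE '"', ESCAPE '"');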
Using CHANGETABLE() in SSDT Visual Studio 2019 results in unresolved reference to object
I'm in the process of migrating a bunch of existing databases to a Visual Studio project so they can be put in source control. Unfortunately, there is one stored procedure which uses a table on which change tracking is enabled. It contains a left join to the following nested select statement: (select distinct ch.batch_number from CHANGETABLE(CHANGES dbName.dbo.batchTable, 0) as ch) This gives the following warning: SQL71562: Procedure: [dbo].[stMergeBatchInfo] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [CT].[batch_number] or [CT].[ch]::[batch_number]. It indicates that the column ch.batch_number is unknown. It appears that when the CHANGETABLE function is used, the column definition is unknown to the project. The table being read is referenced through a DACPAC file and can be used in standard SELECT statements just fine. Does anyone know if there is any way to get rid of these build warnings? The target SQL Server version is 2016. The database and table names are dummy names, by the way.
I had a similar issue and actually came here looking for an answer. There are two things that might help. First, make sure you have master referenced in your project: under the project, right-click References and add the system database master. Second, I did a schema compare from the database to the project and noticed my table was missing ALTER TABLE [dbo].[History_Table] ENABLE CHANGE_TRACKING. I hope either of these helps you out.
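For reference, a minimal sketch of the change-tracking DDL that has to exist (or be declared in the project) before CHANGETABLE references can resolve, using the dummy names from the question; the retention values are arbitrary assumptions:

-- database-level change tracking (retention settings are placeholders)
ALTER DATABASE [dbName]
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- table-level change tracking on the table queried by CHANGETABLE
ALTER TABLE [dbo].[batchTable]
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);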
I've encountered this before. In my case, the version of the master database reference didn't match the version of the database project. In my sqlproj file, the version of the database is 2014 (120):

<DSP>Microsoft.Data.Tools.Schema.Sql.Sql120DatabaseSchemaProvider</DSP>

Next, ensure your master database reference is the same version (120):

<ArtifactReference Include="$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\120\SqlSchemas\master.dacpac">
  <HintPath>$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\120\SqlSchemas\master.dacpac</HintPath>
  <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
  <DatabaseVariableLiteralValue>master</DatabaseVariableLiteralValue>
</ArtifactReference>

Thanks
RepoDb does not seem to work for SQL Server tables with dot in the name
I'm starting to use RepoDb, but my SQL Server 2016 database has tables with a dot in the middle of the name, like this: User.Data. Moving from the full .NET Entity Framework to RepoDb, I'm facing this issue. I'm using the fluent mapping and I wrote something like this: FluentMapper .Entity<UserData>() .Table("[User.Data]") .Primary(u => u.UserId) I get the exception MissingFieldsException, which says: There are no database fields found for table '[User.Data]'. Make sure that the target table '[User.Data]' is present in the database and/or at least a single field is available. Just out of curiosity, I created a table UserData with the same attributes and primary key and it worked great (changing the fluent mapper to .Table("[UserData]")). Am I missing something? Thanks for helping me.
Support for this is only available in RepoDb.SqlServer v1.0.13 or above. You can use any of the approaches below; make sure to include the quotes if you are specifying the database and schema.
Via the built-in MapAttribute: [Map("[User.Data]")]
Via the TableAttribute from the System.ComponentModel.DataAnnotations namespace: [Table("[User.Data]")]
Via FluentMapper, as you did: FluentMapper .Entity<UserData>() .Table("[User.Data]");
Can't add column to Crate table
I have an existing table created some time ago. The table is on a Crate cluster with 3 nodes, all running version 0.54.9. When I run the following command: ALTER TABLE my_table ADD COLUMN size integer I get the following error involving the names of existing columns: SQLActionException[Merge failed with failures {[mapper [location] of different type, current_type [ip], merged_type [ArrayMapper]]}] The table has an ARRAY(ip) column called "locations", but I don't understand how that is related. When I ran the same command on a local instance with the same schema, it ran fine. Searching online, the nearest error other people have come across has been related to Elasticsearch indexes. This suggests to me that the table (or its mapping in ES) may be corrupt, but I'm not sure how to fix that, as the cluster is currently in production. Does anyone have any ideas how one might check / repair this?
This is a known bug in Crate's 0.54 releases. It's already fixed but not yet released, see: https://github.com/crate/crate/commit/6d01cb8a45bb904f45ab1270975ef81e88bf776c Please be patient and upgrade to 0.55.0 (testing), or build Crate from source yourself.
SQL Server Mgmt Studio shows "invalid column name" when listing columns?
I'm used to scripting in Python or Matlab, and my first couple hours with SQL have been infuriating. I would like to make a list of columns appear on the screen in any way, shape, or form; but when I use commands like select * from "2Second Log.dbo.TagTable.Columns" I keep getting the error: Invalid column name '[the first column in my table]'. even though I never explicitly asked for [the first column in my table], it found it for me. How can you correctly identify the first column name, and then still claim it's invalid!? Babies will be strangled. This db was generated by Allen Bradley's FactoryTalk software. What I would really like to do is produce an actual list of "TagName" strings...but I get the same error when I try that. If there were a way to actually double click the table and open it up and look at it (like in Matlab), that would be ideal.
Echoing juergen's suggestion in the comment above: it looks like you're running the query against the master database, not the 2Second Log database that actually has your table. (You can tell by looking at the database dropdown in the top left of your screenshot.) Two things you can do: change the dropdown in the top left to 2Second Log, which points your query at that database, or put your database name in brackets as juergen suggested, i.e. select * from [2Second Log].dbo.TagTable. As an aside, if you're looking for a good SQL tutorial, I highly recommend the Mode SQL tutorial. It's a fantastic interactive platform to get your SQL feet wet.
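If what you really want is the column list itself (and the TagName strings), here is a minimal sketch that should work once the right database is targeted; it assumes the table is literally dbo.TagTable inside [2Second Log]:

USE [2Second Log];

-- list every column of dbo.TagTable
SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'TagTable'
ORDER BY ORDINAL_POSITION;

-- pull the TagName values themselves
SELECT TagName FROM dbo.TagTable;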
Always use brackets when names/fields have spaces or dashes: select * from [2Second Log].dbo.TagTable