Say I have a cube or a more complex closed triangulated object in Blender. How would I store this in PostGIS?
PostGIS stores 3D geometry via well-known text (WKT) as a POLYHEDRALSURFACE or TIN. Is there any way to get a Blender object into PostGIS?
You read Blender's data and create INSERT statements for PostgreSQL. Since Blender embeds a Python interpreter, you can run a Python script inside Blender that sends the data to PostgreSQL.
The first step is installing a Python PostgreSQL module, such as psycopg2, that can be used within Blender. There are several ways to do this, including appending the module's location to sys.path.
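For example, if psycopg2 is already installed in a Python environment outside Blender, a minimal sketch (the path below is a placeholder for wherever your site-packages directory lives):
import sys
# Placeholder path: point this at the site-packages directory that contains psycopg2.
sys.path.append('C:/Python/Lib/site-packages')
import psycopg2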
Once a Python script running in Blender can talk to the PostgreSQL server, read Blender's mesh data to generate the INSERT statements.
# One VALUES tuple per vertex; PostGIS casts the quoted WKT literal to geometry.
pg_insert = 'INSERT INTO mytable (v_loc) VALUES '
for v in obj.data.vertices:
    pg_insert += "('POINT({} {} {})'),".format(v.co.x, v.co.y, v.co.z)
pg_insert = pg_insert.rstrip(',') + ';'
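To actually send that statement to the server from inside Blender, a minimal psycopg2 sketch (connection parameters are placeholders; obj is the Blender object from above, e.g. bpy.context.active_object):
import psycopg2

# Placeholder connection parameters; adjust for your server.
conn = psycopg2.connect(host='localhost', dbname='gis', user='postgres', password='secret')
with conn, conn.cursor() as cur:
    cur.execute(pg_insert)  # the statement built above; committed when the block exits
conn.close()
If you want the whole closed surface rather than loose vertices, the same pattern extends to building a POLYHEDRALSURFACE or TIN WKT string from obj.data.polygons instead of individual points.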
I have a file ontobible.owl. How can I extract data from that file and then save it to MySQL (I want to display data from ontobible.owl on a website)? Can anyone help me?
Edit:
Here is my ontobible.owl file (https://teamtrainit.com/ontobible.owl).
I've tried opening ontobible.owl with Sublime Text 3, and it contains entries like this:
<Verse rdf:about="http://www.semanticweb.org/budsus/ontologies/2021/7/ontobible#HOS5_2">
<verseID>HOS5_2</verseID>
<verse_text>And the revolters are profound to make slaughter, though I have been a rebuker of them all.</verse_text>
</Verse>
<Verse rdf:about="http://www.semanticweb.org/budsus/ontologies/2021/7/ontobible#2CH2_1">
<hasPerson rdf:resource="http://semanticbible.org/ns/2006/NTNames#god_1324"/>
<hasPerson rdf:resource="http://www.co-ode.org/roberts/family-tree.owl#solomon_2762"/>
<verseID>2CH2_1</verseID>
<verse_text>And Solomon determined to build an house for the name of the LORD, and an house for his kingdom.</verse_text>
</Verse>
How can I convert those XML tags to an array or JSON so that I can save the data to a MySQL database?
You have several options for extracting data from OWL:
Use the OWL API and write Java code (I think the OWL API is accessible from other languages too) to extract the data and pack it in the format you need. You can also extract data with SPARQL queries via the Jena API; see the sketch after this list.
Install Protégé, open your file in it, and save it in JSON-LD format. This format is very similar to regular JSON, and you can easily transform it for your needs.
Install a Fuseki server, add your file, and extract data from it with SPARQL queries.
I think the second option is the easiest to start with if you don't want to write queries or code, and it won't take long.
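As a rough illustration of the SPARQL route, here is a sketch using Python's rdflib instead of Jena. The property names are taken from the file excerpt above; the namespace prefix is an assumption based on the rdf:about IRIs, so adjust it if your default xmlns differs:
import json
from rdflib import Graph

g = Graph()
g.parse('ontobible.owl', format='xml')  # the file is RDF/XML

# Namespace and property names taken from the excerpt above (assumption).
query = """
PREFIX ob: <http://www.semanticweb.org/budsus/ontologies/2021/7/ontobible#>
SELECT ?verseID ?text WHERE {
    ?verse ob:verseID ?verseID ;
           ob:verse_text ?text .
}
"""

rows = [{'verseID': str(r.verseID), 'verse_text': str(r.text)} for r in g.query(query)]
print(json.dumps(rows, indent=2))  # JSON ready to load into MySQL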
How does creating a large object work? Does there need to be a client? All I am hoping to do is have an image as one column.
I am typing the following commands after creating my table but I just get an error about the path not being correct for the image (even though I have it starting right from the C drive).
CREATE TABLE image (name text,
raster oid);
INSERT INTO image (name, raster)
VALUES ('beautiful image', lo_import('C:Documents/etc/motd'));
I am not running any C code. Am I supposed to do that, or does this automatically create the large object?
If I am supposed to run some C code, where would I do it with respect to PostgreSQL?
Can I do everything I want with PostgreSQL syntax alone? Is there another way to approach storing images as a field?
Any help will be greatly appreciated.
According to the PostgreSQL documentation, there are two ways to handle large objects (from Java via JDBC):
use the bytea data type via the getBytes(), setBytes(), getBinaryStream(), or setBinaryStream() methods,
or
use the LargeObject API.
Also, you can convert your image to a base64 string and then insert it directly using, for instance, pgAdmin:
CREATE TABLE image_table (name varchar(255), DATA bytea);
INSERT INTO image_table
VALUES ('my_image.jpg',
decode('paste your byte array string here', 'base64'));
Full sample code here.
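To produce that base64 string from an image file, a minimal sketch (the file name is a placeholder):
import base64

with open('my_image.jpg', 'rb') as f:  # placeholder file name
    print(base64.b64encode(f.read()).decode('ascii'))
If you are scripting the load anyway, you can skip base64 and bind the raw bytes directly, for example with psycopg2 (connection parameters and the file name are placeholders):
import psycopg2

conn = psycopg2.connect(host='localhost', dbname='mydb', user='postgres', password='secret')
with conn, conn.cursor() as cur:
    with open('my_image.jpg', 'rb') as f:
        cur.execute('INSERT INTO image_table (name, data) VALUES (%s, %s)',
                    ('my_image.jpg', psycopg2.Binary(f.read())))
conn.close()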
I have been having some trouble converting polygons from Geometry to Geography.
I have found the following articles and have taken the steps advised but I am still unable to convert all the instances to valid geography objects.
Union start point to ensure correct orientation
blogs.msdn.com/b/edkatibah/archive/2008/08/19/working-with-invalid-data-and-the-sql-server-2008-geography-data-type-part-1b.aspx
Reduce polygon
blogs.msdn.com/b/edkatibah/archive/2009/06/05/working-with-invalid-data-and-the-sql-server-2008-geography-data-type-part-2.aspx
Buffer and negative buffer
http://www.beginningspatial.com/fixing_invalid_geography_data
Remove points from collection after reduce
http://alastaira.wordpress.com/2012/03/02/cleaning-up-artefacts-created-by-reduce/
My process thus far is the following.
Import shape file to SQL Server Developer Edition (64-bit) using Morten Nielsen's excellent Shape2Sql tool from www.sharpgis.net/page/SQL-Server-2008-Spatial-Tools.aspx
I had to import the shape file as geometry as I was getting errors about invalid geography values.
Then I tried to create the geographies as per my SQL script below:
drop table [MapData].[dbo].[World_SeasSplitGeog]
SELECT
[IDX]
,[OBJECTID]
,[NAME]
,[ID]
,[Gazetteer_]
,[Shape_Leng]
,[Shape_Le_1]
,[Shape_Area]
,[ORIG_FID]
,dbo.RemoveArtefacts(GEOGRAPHY::STGeomFromWKB(shape.STUnion(shape.STStartPoint()).STBuffer(0.00001).STBuffer(-0.00001).Reduce(0.000001).STAsBinary(),4326).MakeValid()).MakeValid() as [Shape]
INTO
[MapData].[dbo].[World_SeasSplitGeog]
FROM
[MapData].[dbo].[World_SeasSplit]
The problem, however, is that not all the geometries are converted to valid geographies.
I'm trying to find an effective way of saving the result of my Spark job as a CSV file. I'm using Spark with Hadoop, and so far all my files are saved as part-00000.
Any ideas how to make my spark saving to file with a specified file name?
Since Spark uses the Hadoop FileSystem API to write data to files, this is somewhat inevitable. If you do
rdd.saveAsTextFile("foo")
it will be saved as "foo/part-XXXXX", with one part-* file for every partition in the RDD you are trying to save. The reason each partition in the RDD is written to a separate file is fault tolerance: if the task writing the 3rd partition (i.e. part-00002) fails, Spark simply re-runs the task and overwrites the partially written/corrupted part-00002, with no effect on the other parts. If they all wrote to the same file, it would be much harder to recover a single task from failure.
The part-XXXXX files are usually not a problem if you are going to consume the output again in Spark or other Hadoop-based frameworks: since they all use the HDFS API, asking them to read "foo" will read all the part-XXXXX files inside foo as well.
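For example, reading the directory back picks up every part file automatically (a PySpark sketch; sc is an existing SparkContext and "foo" is the output directory written above):
# Reading the directory "foo" loads all part-XXXXX files under it.
lines = sc.textFile("foo")
print(lines.count())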
I'd suggest doing it this way (Java example):
// Collapse to a single partition so only one part file is written.
theRddToPrint.coalesce(1, true).saveAsTextFile(textFileName);
FileSystem fs = anyUtilClass.getHadoopFileSystem(rootFolder);
// Merge the part file(s) into a single file with the destination name you want.
FileUtil.copyMerge(
    fs, new Path(textFileName),
    fs, new Path(textFileNameDestiny),
    true, fs.getConf(), null);
Extending Tathagata Das's answer to Spark 2.x and Scala 2.11:
Using Spark SQL we can do this in a few lines.
//implicits for magic functions like .toDF
import spark.implicits._
val df = Seq(
("first", 2.0),
("choose", 7.0),
("test", 1.5)
).toDF("name", "vals")
//write DataFrame/DataSet to external storage
df.write
.format("csv")
.save("csv/file/location")
Then you can go ahead and proceed with adoalonso's answer.
I have an idea, but no ready code snippet. Internally (as the name suggests) Spark uses the Hadoop output format (as well as an InputFormat when reading from HDFS).
In Hadoop's FileOutputFormat there is the protected member setOutputName, which you can call from an inheriting class to set a different base name.
It's not really a clean solution, but inside foreachRDD() you can basically do whatever you like, including creating a new file.
In my solution this is what I do: I save the output on HDFS (for fault tolerance reasons), and inside a foreachRDD I also create a TSV file with statistics in a local folder.
I think you could probably do the same if that's what you need.
http://spark.apache.org/docs/0.9.1/streaming-programming-guide.html#output-operations
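A rough sketch of that idea (PySpark Streaming here; the stream variable and the local output path are placeholders, and the rows are assumed to be tuples):
def write_stats(rdd):
    # collect() is fine for a small per-batch summary; avoid it for large data.
    rows = rdd.collect()
    with open('/tmp/stats.tsv', 'w') as f:  # placeholder local path with the name you want
        for row in rows:
            f.write('\t'.join(str(col) for col in row) + '\n')

# 'stream' is a placeholder DStream from your streaming job.
stream.foreachRDD(write_stats)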
This is my first time trying to use Informix. I have around 160 tables to load, using pipe-delimited text files. We have an older series of batch files that a previous developer wrote to load Informix data, but they're not working with the new version of Informix (11.5) that I installed. I'm running it on a Windows 2003 server.
I've modified the batch file to execute the onpladm commands for one file, so this batch file looks like this:
onpladm create project dif31US-1-table-Load
onpladm create object -F diffdbagidaxsid.dev
onpladm create object -F diffdbagidaxsid.fmt
onpladm create object -F diffdbagidaxsid.map
onpladm create object -F diffdbagidaxsid.job
When I run this, it successfully creates the project and device array, but I get an error creating the format. The only error I get is:
Create object DELIMITEDFORMAT diffile1fmt failed!
Invalid format!
The diffdbagidaxsid.fmt file is as follows:
BEGIN OBJECT DELIMITEDFORMAT diffile1fmt
PROJECT dif31US-1-table-Load
CHARACTERSET ASCII
RECORDSTART
RECORDEND
FIELDSTART
FIELDEND
FIELDSEPARATOR |
BEGIN SEQUENCE
FIELDNAME agid
FIELDTYPE Chars
END SEQUENCE
BEGIN SEQUENCE
FIELDNAME axsid
FIELDTYPE Chars
END SEQUENCE
END OBJECT
As you can see, it has only 2 columns. It originally had nothing following CHARACTERSET. I've tried it with ASCII, and with the numeric code for ASCII, and still get the same error.
Is there any way to get a more verbose error message?
Also, can anyone recommend a decent (meaning active community) forum for Informix? I've tried the old comp.databases.informix forum, http://www.dbforums.com, the 'official' forum on IBM DeveloperWorks, and here of course. None have very much activity. We have to do this testing because we have customers (or maybe just one big one) who use it, so we have to test our data and API against it.
Succinctly, I don't think there is a way to get much more information out of onpladm.