FileMaker export that maintains table information in the headers - database

When we use the Export Records function in FileMaker, we can see header information in the file (i.e., using the merge format), but the table information is missing. Is there a way of keeping the table info in the exported file?
So for example we have a table named 'T3' but when we export fields from this table the resulting .csv file reads:
__Delirium_DRStotal_score
instead of
T3::__Delirium_DRStotal_score
Any help much appreciated,
Many thanks
Steve

FileMaker can export related fields too, and these preserve their table name on export - at least this happens when you export to XML. So if you join the table to itself by a unique ID and export the (identical) data from the related copy, it should give you the names you want. (Almost, because you'll have to name that other table occurrence differently.)

Related

PBI - Export underlying data won't have all of the columns

I'm trying to export columns with the Underlying data export, but exports from different visuals give different results even though they should always contain the same columns.
So far I have dug up on the internet that I have to:
add all desired columns for export to the Fact table as new columns (if they are measures, convert them to columns and to text; if they are in a different Dimension-type table, create a new column in the Fact table with the RELATED function and pull in the concrete measure/column), so you have all of them in the Fact table as text-type columns,
allow Underlying data export in your settings,
profit?
But so far no luck.
I did notice that if I export from a visual based on the Fact table, I get all the columns (all of the wanted export columns not originally in the Fact table having been added to it with the RELATED function as new columns), but if I export from a visual based on a Dimension table, the only columns I see are from that particular Dimension table. Even though the tables are connected to each other and both have IDs with no duplicates.
Any ideas?

import data into a specific column in hana table

I have loaded data into HANA using a CSV file and now I have added a new column to the table by
ALTER TABLE <tablename> ADD (<columnname> <datatype>);
Now I want to import data into this specific column using a CSV file. I could not find any methods to import to a specific column. Any help is appreciated. Thanks in advance.
The CSV import feature doesn't allow for partial loading and "column filling".
What you could do is to load the new data (together with the key columns of course) to a staging table and then update the target table columns from there.
Be aware that the CSV import is not meant to be an ETL solution; that is what the smart data integration features are for.
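A minimal sketch of that staging approach (the table, column, and file names here are placeholders, not from the original question):

-- staging table holds the key plus the new column's values from the CSV
CREATE COLUMN TABLE staging_newcol (id INTEGER, new_col NVARCHAR(100));

IMPORT FROM CSV FILE '/path/to/new_col.csv' INTO staging_newcol
    WITH RECORD DELIMITED BY '\n' FIELD DELIMITED BY ',';

-- fill the new column in the target table from the matching staging rows
UPDATE target_table
    SET new_col = (SELECT s.new_col FROM staging_newcol s WHERE s.id = target_table.id)
    WHERE EXISTS (SELECT 1 FROM staging_newcol s WHERE s.id = target_table.id);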

How should I incorporate different types of files into a database of products?

Context: I recently started a new job. I found my colleagues were exchanging information (product spec sheets, 3D renderings, etc) via files and email, which creates the infuriating situation where there are multiple versions of files being passed around. I decided to start building a solution using FileMaker to resolve this, mainly because I'm not really a technical person and FileMaker seems pretty easy to understand. I have been learning both database design and FileMaker literally from scratch.
Purpose: The solution needs to be able to do the following:
Allow central management of data and files
Export a product roadmap for sales people
Export current product catalogue for sales people
Export product spec sheets
This, in my mind, will help everyone by maintaining a single set of accurate data which can be exported in different views.
Question: What is the best way to incorporate different types of files into the database?
For some views, I would like to be able to show related files, including 3D renderings, images, SoC data sheets, user manuals, etc. What would the schema look like?
Regarding files, I have the following tables:
Files (FileID, FileFormatID, FileName, FileTypeID, FileContainer, DateCommited, DateModified, TimeModified, Comment)
FileFormats (FileFormatID, FileFormat), where FileFormat is svg, pdf, Word, png, jpg, etc...
FileTypes (FileTypeID, FileType), where FileType is 3D Rendering, Gerber, Photo, Certification, QIG, etc...
Solution generated by my feeble mind:
ProductFiles (ProductID, FileID), where ProductID is the key in a Products table.
SoC_Files (SoC_ModelNo, FileID), where SoC_ModelNo is the key in an SoC table.
This way I can include in my views a list of files related to a product or SoC, showing only the FileTypes or FileFormats I need.
However, this seems messy. Is there a better way to do this?
Thanks! It's my first question on StackOverflow, so please let me know if the question is unclear or inappropriate in any way.
EDIT: The SoCs are not products themselves, they're used in the products. Some customers want that information. Each file can belong to multiple products or SoCs, and each product or SoC can have more than one file.
I suspect we need more information about what your solution is about. If it's chiefly about documentation, then the differences between the objects being documented are most likely irrelevant.
In any case, you describe a many-to-many relationship between Files and Products - so you should have a join table between these two, where each combination of file-to-product will be stored as an individual record.
If it turns out that you do need a separate SoC table, you could turn the join table into a "star-join" table - meaning it would have fields for:
FileID
ProductID
SoCID
and in each record either the ProductID or the SoCID field would be empty.
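FileMaker tables are defined in Manage Database rather than with SQL, but as a purely relational sketch the star-join table could look something like this (the names are illustrative, not taken from your schema):

CREATE TABLE FileLinks (
    FileLinkID  INTEGER PRIMARY KEY,  -- one record per file-to-product or file-to-SoC join
    FileID      INTEGER NOT NULL,     -- key into Files
    ProductID   INTEGER,              -- key into Products; empty when the link is to an SoC
    SoCID       INTEGER               -- key into the SoC table; empty when the link is to a product
);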
Note that in FileMaker you have another option to establish a many-to-many relationship: you could use a checkbox field in the Files table to select the products which the file documents. However, in such a case, (1) you won't be able to record anything about a specific file-to-product join and (2) it will be more difficult to produce a report of files-by-product or vice versa.
The FileFormats table is redundant and can be replaced by a custom value list: file extensions are unique and unchanging, and you have nothing to record about any of them. I have a feeling the same is true about the FileTypes table.
An exception to the above: if you can have multiple versions of the same file in different formats, you may need to add another table for the physical files.

How to do an initial batch import of CSV / MySQL data into neo4j database

I am considering replacing a MySQL database with a neo4j database. I am a complete beginner with neo4j and would like to know how to go about doing a batch insert of my current MySQL data into the neo4j database so I can experiment and begin to learn about neo4j.
The relational database consists of 4 tables: Person, Organism, Story, Links.
Links describes relationships between rows in the other 3 tables.
Links:
ID, FromTable, FromID, ToTable, ToID, LinkType
Person:
ID, property_2, property_1, etc ...
Organism:
ID, property_A, property_B, etc ....
Story:
ID, property_x, property_y
Each ID field is an auto-incrementing integer starting from 1 in each table.
In case it is not obvious, a link between, say, the person with ID 3 and the story with ID 42 would have a row in the Links table: ID=autoincrement, FromTable=Person, FromID=3, ToTable=Story, ToID=42.
Even though I am using the terms 'from' and 'to' the actual links are not really 'directed' in practice.
I have looked at Michael Hunger's batch-import but that seems to only work with a single table of nodes and one table of relationships, whereas I am looking to import three different types of nodes and one list of relationships between them.
I have got neo4j up and running.
Any advice to get me started would be greatly appreciated.
I am not familiar with Java, though I do use Python and bash shell scripts.
After initial import, I will be using the RESTful interface with Javascript.
Based on advice in the git repo: using Michael Hunger's batch-import, it is possible to import multiple node types from a single .csv file.
To quote Michael:
Just put them all into one nodes file, you can have any attribute not
having a value in a certain row, it will then just be skipped.
So the general approach I used was:
combine all the nodes tables into a new table called nodes:
Create a new table nodes with an auto-incrementing newID field and a type field. The type field will record which table the node data came from.
Add all the possible column names from the 3 node tables, allowing NULLs.
INSERT INTO nodes the values from Person, then Organism, then Story, in addition to setting the type field to person, organism, or story. Leave any unrelated fields blank.
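A sketch of that first step in MySQL, using the placeholder column names from the tables above (the real tables will have more columns):

-- one row per node, whichever table it came from
CREATE TABLE nodes (
    newID INT AUTO_INCREMENT PRIMARY KEY,
    ID INT,                  -- the original ID within the source table
    type VARCHAR(20),        -- 'person', 'organism' or 'story'
    property_1 VARCHAR(255), property_2 VARCHAR(255),
    property_A VARCHAR(255), property_B VARCHAR(255),
    property_x VARCHAR(255), property_y VARCHAR(255)
);

INSERT INTO nodes (ID, type, property_1, property_2)
    SELECT ID, 'person', property_1, property_2 FROM Person;
INSERT INTO nodes (ID, type, property_A, property_B)
    SELECT ID, 'organism', property_A, property_B FROM Organism;
INSERT INTO nodes (ID, type, property_x, property_y)
    SELECT ID, 'story', property_x, property_y FROM Story;

The type values need to match the FromTable/ToTable values stored in Links (MySQL string comparison is case-insensitive by default) so that the join in the next step can find them.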
In another new table, rels, add the newly created newID values to the Links data using a SQL JOIN:
INSERT INTO rels
SELECT
    n1.newID AS fromNodeID,
    n2.newID AS toNodeID,
    L.LinkType,
    L.ID
FROM Links L
LEFT JOIN nodes n1
    ON L.FromID = n1.ID AND L.FromTable = n1.type
LEFT JOIN nodes n2
    ON L.ToID = n2.ID AND L.ToTable = n2.type;
Then export these two new tables, nodes and rels, as tab-separated .csv files and use them with batch-import:
$java -server -Xmx4G -jar target/batch-import-jar-with-dependencies.jar target/graph.db nodes.csv rels.csv
As you say that you are happy working with Python and shell scripts, you may also want to have a look at the command-line tools which come with py2neo, in particular geoff. This uses a flat file format I put together for holding graph data, so in your case you would need to build a flat file from your source data and insert this into your graph database.
The file format and server plugin are documented here and the py2neo module for the client application is here.
If anything is missing from the docs or you want more information about this, then feel free to drop me an email.
Nigel

Oracle -- Import data into a table with a different name?

I have a large (multi-GB) data file exported from an Oracle table. I want to import this data into another Oracle instance, but I want the table name to be different from the original table. Is this possible? How?
Both importing and exporting systems are Oracle 11g. The table includes a BLOB column, if this makes any difference.
Thanks!
UPDATES:
The idea here was to update a table while keeping the downtime on the system that's using it to a minimum. The solution (based on Vincent Malgrat's answer and APC's update) is:
Assuming our table name is A
Make a temp schema TEMP_SCHEMA
Import our data into TEMP_SCHEMA.A
CREATE TABLE REAL_SCHEMA.B AS SELECT * FROM TEMP_SCHEMA.A
Rename REAL_SCHEMA.A to REAL_SCHEMA.A_OLD
Rename REAL_SCHEMA.B to REAL_SCHEMA.A
DROP TABLE REAL_SCHEMA.A_OLD
This way, the downtime is only during steps 4 and 5, both of which should be independent of data size. I'll post an update here if this does not work :-)
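As SQL, the cutover part of those steps might look like the following (a sketch using the schema and table names from the list, run by a user with the necessary privileges; note that CREATE TABLE ... AS SELECT does not copy indexes, grants, triggers or most constraints):

CREATE TABLE REAL_SCHEMA.B AS SELECT * FROM TEMP_SCHEMA.A;
ALTER TABLE REAL_SCHEMA.A RENAME TO A_OLD;
ALTER TABLE REAL_SCHEMA.B RENAME TO A;
DROP TABLE REAL_SCHEMA.A_OLD;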
If you are using the old EXP and IMP utilities you cannot do this. The only option is to import into a table of the same name (although you could change the schema which owns the table).
However, you say you are on 11g. Why not use the Data Pump utility introduced in 10g, which replaces Import and Export? In 11g that utility offers the REMAP_TABLE option, which does exactly what you want.
edit
Having read the comments the OP added to another response while I was writing this, I don't think the REMAP_TABLE option will work in their case. It only renames new objects. If a table with the original name exists in the target schema the import fails with ORA-39151. Sorry.
edit bis
Given the solution the OP finally chose (drop existing table, replace with new table) there is a solution with Data Pump, which is to use the TABLE_EXISTS_ACTION={TRUNCATE | REPLACE} clause. Choosing REPLACE drops the table whereas TRUNCATE merely, er, truncates it. In either case we have to worry about referential integrity constraints, but that is also an issue with the chosen solution.
I post this addendum not for the OP but for the benefit of other seekers who find this page some time in the future.
I suppose you want to import the table into a schema in which the name is already being used. I don't think you can change the table name during the import. However, you can change the schema with the FROMUSER and TOUSER options. This will let you import the table into another (temporary) schema.
When it is done, copy the table to the target schema with a CREATE TABLE AS SELECT. The time it will take to copy the table will be negligible compared to the import, so this won't waste too much time. You will need twice the disk space during the operation, though.
Update
As suggested by Gary, a cleverer method would be to create a view or synonym in the temporary schema that references the new table in the target schema. You won't need to copy the data after the import as it will go through directly to the target table.
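A sketch of that redirection, reusing the names from the steps above (whether the legacy import will actually insert through the synonym depends on your imp options, e.g. IGNORE=Y, so treat this as an outline rather than a tested recipe):

CREATE SYNONYM TEMP_SCHEMA.A FOR REAL_SCHEMA.A;
-- run the import into TEMP_SCHEMA; inserts addressed to TEMP_SCHEMA.A resolve to REAL_SCHEMA.A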
Use the option REMAP_TABLE=EXISTING_TABLE_NAME:NEW_TABLE_NAME in impdp. This works in 11gR2.
Just import it into a table with the same name, then rename the table.
Create a view as select * from ... the table you want to import into, with the view matching the name of the table in the export. Ignore errors on import.
