FileMaker to FileMaker database import table issue

I have one FileMaker database for my own use that I update regularly from a second FileMaker database (which is active every day).
I have a simple script that imports all data from the active database into my own database. Both databases are identical in structure - they just differ in records.
The problem I have is with tables that feed portals. This data, when imported, shows the correct number of rows, but every row is a duplicate of the last row.
This is my import script:
http://farm9.staticflickr.com/8165/7351115686_d7efbac90e_b.jpg
And this is the original table (left) and the table in my database after import (right):
http://farm8.staticflickr.com/7229/7351115740_c6677dfee5_b.jpg
This has totally thrown me - what am I doing wrong?
Any help will be hugely appreciated.
Best,
Steve

It looks like you're matching based on the foreign key, not the primary key. When you match on the foreign key, the import iterates through the incoming rows, every target row with that key is overwritten in turn, and the last one is the one that stays.
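The FileMaker script itself is only shown in the screenshot, but the failure mode can be sketched in SQL terms. In this hypothetical sketch, portal_rows and staging_rows stand in for the portal's line-item table and the incoming data; FileMaker's "update matching records" behaves analogously when it is keyed on a non-unique field:

-- Hypothetical tables: portal_rows (target) and staging_rows (incoming).
-- Matching on the non-unique parent key: many incoming rows match each
-- target row, so each target row keeps the values of just one of them
-- (an arbitrary one here; in FileMaker's sequential import, the last).
UPDATE tgt
SET    tgt.line_data = src.line_data
FROM   portal_rows AS tgt
JOIN   staging_rows AS src
       ON src.parent_fk = tgt.parent_fk;  -- non-unique: last write wins

-- Matching on the per-row primary key pairs rows one-to-one and imports
-- each line item correctly.
UPDATE tgt
SET    tgt.line_data = src.line_data
FROM   portal_rows AS tgt
JOIN   staging_rows AS src
       ON src.row_id = tgt.row_id;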

Related

Find foreign keys based on data

I am looking at a database which has almost no foreign keys defined.
Is there a tool that can perform some data analysis/heuristics and "guess" the relations based on the data? I am looking for some kind of report which can be used as a manual guide/checklist.
I had a similar problem - every table had an Object_ID column... but had secondary IDs too.
All were of a weird GUID-ish form.
I ended up writing a brute-force scanner (using dynamic SQL over information_schema.columns).
Of course this approach relied on the values being globally unique... If you have a bunch of int identity columns and no way to connect the tables, then you are in a bit of trouble!
Perhaps there is a timestamp column or a DATETIME defaulting to GETDATE() - you could use that to identify records in different tables that were created at approximately the same time.
A lot depends on your schema...
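A minimal sketch of that kind of brute-force scanner, assuming the IDs are uniqueidentifier columns and everything lives in the dbo schema (table and column names come from the metadata, so nothing here is specific to a real database). It generates one count-of-overlap query per column pair; pairs with a high match count are foreign-key candidates:

-- Brute force: for every pair of uniqueidentifier columns in different
-- tables, count overlapping values. Assumes the dbo schema and at least
-- one such pair; the generated batch can get large on wide schemas.
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql
    + N' UNION ALL SELECT '''
    + a.TABLE_NAME + N'.' + a.COLUMN_NAME + N''' AS child_col, '''
    + b.TABLE_NAME + N'.' + b.COLUMN_NAME + N''' AS parent_col, '
    + N'COUNT(*) AS matches FROM '
    + QUOTENAME(a.TABLE_NAME) + N' AS c JOIN '
    + QUOTENAME(b.TABLE_NAME) + N' AS p ON c.'
    + QUOTENAME(a.COLUMN_NAME) + N' = p.' + QUOTENAME(b.COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS AS a
JOIN INFORMATION_SCHEMA.COLUMNS AS b
  ON a.TABLE_NAME <> b.TABLE_NAME
WHERE a.DATA_TYPE = 'uniqueidentifier'
  AND b.DATA_TYPE = 'uniqueidentifier';

SET @sql = STUFF(@sql, 1, 11, N'');  -- strip the leading ' UNION ALL '
EXEC sp_executesql @sql;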

SQL Server: foreign key showing one-to-one relation instead of one-to-many

For some mystical reason I started the database design with the built-in Database Diagrams GUI designer (SQL Server Management Studio); actually I only did the first two tables (users and product) there, the rest were done using query commands.
It turns out that at the end there's something I didn't expect between:
users (table)
product (table)
I've created a foreign key column ("users_id") in the "product" table pointing to the "users" table (column "users_id").
Instead of having a one-to-many relation, it seems to be a one-to-one relation.
The users table is referencing the product table, and I don't want this.
What is the problem?
edit: 4-sep-2014 10:48
I've dropped the FK_product_TO_users constraint and created a new one, but the results are still the same:
ALTER TABLE product
DROP CONSTRAINT FK_product_TO_users
GO
ALTER TABLE product
ADD
CONSTRAINT FK_product_TO_users
FOREIGN KEY (users_id)
REFERENCES users (users_id)
edit: 4-sep-2014 12:51
I've rebuilt the database using just queries, with no GUI help in the table design. The problem with FK_product_TO_users was fixed, though I still don't know why.
It turns out that after that fix, the same issue is present in two other tables with two FK relations.
Besides this, inputting data into those tables seems to work fine.
I'm wondering if this is just a bug in the Database Diagrams GUI?
This is a really interesting one.
You can do one thing: just delete the key FK_product_TO_users and rebuild it.
You do NOT need to delete users_id from the product table.
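One cause worth checking that the thread doesn't spell out (so treat this as an assumption): diagramming tools typically draw a relation as one-to-one when the foreign-key column itself carries a unique index or unique constraint. This query lists the indexes covering product.users_id so you can see whether one of them is unique:

-- List indexes on product.users_id; is_unique = 1 would explain a
-- one-to-one line in the diagram. Assumes the dbo schema.
SELECT i.name, i.is_unique
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.product')
  AND c.name = 'users_id';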

Finding all pseudo-related data within an SQL Server database

I have a requirement to change a "broken" computed column in a table to an identity column, and as part of this work to update some of the field values. This column is a pseudo primary key, so it doesn't have any constraints defined against it. I therefore need to determine whether any other tables in the database contain a pseudo foreign key back to this column.
Before writing something myself, I'd like to know if there is an existing script/tool that, when given a value (not a column name), can search across the data in all of the tables in a SQL Server database and show where that value exists.
Thanks in advance.
A quick Google found this page/script:
http://vyaskn.tripod.com/search_all_columns_in_all_tables.htm
I don't personally know of a pretty GUI-interfaced utility that'll do it.
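The linked script works along these lines; here is a condensed sketch, restricted to character columns and with a hypothetical @search value, that generates one probe per column and reports where the value occurs:

-- Hunt for @search in every character column of every base table.
-- The value and its length are placeholders; widen as needed.
DECLARE @search NVARCHAR(100);
SET @search = N'the-value-to-find';

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql
    + N' UNION ALL SELECT '''
    + c.TABLE_NAME + N'.' + c.COLUMN_NAME + N''' AS found_in, '
    + N'COUNT(*) AS hits FROM ' + QUOTENAME(c.TABLE_NAME)
    + N' WHERE ' + QUOTENAME(c.COLUMN_NAME) + N' = @search'
    + N' HAVING COUNT(*) > 0'
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN INFORMATION_SCHEMA.TABLES AS t
  ON t.TABLE_NAME = c.TABLE_NAME AND t.TABLE_TYPE = 'BASE TABLE'
WHERE c.DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');

SET @sql = STUFF(@sql, 1, 11, N'');  -- strip the leading ' UNION ALL '
EXEC sp_executesql @sql, N'@search NVARCHAR(100)', @search = @search;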

SQL Server (2005) - "Deleted On" DATETIME and Indexing

I have a question related to database design. The database that I'm working with requires that data is never physically deleted. We started down the path of adding a "DeleteDateTime" column to some tables: NULL by default, but once stamped it marks a record as deleted.
This gives us the ability to archive our data easily, but I still feel in the dark in a few areas, specifically whether this is in line with best practices, and how to go about indexing these tables efficiently.
I'll give you an example: we have a table called "Courses" with a composite primary key made up of the columns "SiteID" and "CourseID". This table also has a column called "DeleteDateTime" that is used as described above.
I can't use the SQL Server 2008 filtered index feature because we have to be SQL Server 2005 compatible. Should I include "DeleteDateTime" in the clustered index for this table? If so, should it be the first column in the index (i.e. "DeleteDateTime, SiteID, CourseID")?
Does anyone have any reasons why I should or shouldn't follow this approach?
Thanks!
Is there a chance you could transfer those "dead" records into a separate table? E.g. for your Courses table, have a Courses_deleted table or something like that, with an identical structure.
When you "delete" a record, you basically just move it to the "dead" table. That way, the index on your actual, current data stays small and zippy.
If you need an aggregate view, you can always define a Courses_View which unions the two tables together.
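A minimal sketch of that pattern for the Courses table from the question; only the key columns are known, so treat the column list (and the literal key values) as placeholders:

-- Archive table mirroring Courses (columns abbreviated to the known ones).
CREATE TABLE Courses_deleted
(
    SiteID         INT      NOT NULL,
    CourseID       INT      NOT NULL,
    DeleteDateTime DATETIME NOT NULL,
    CONSTRAINT PK_Courses_deleted PRIMARY KEY (SiteID, CourseID)
);
GO

-- "Deleting" a course (here SiteID 1, CourseID 42) moves the row across
-- in one transaction instead of stamping it in place.
BEGIN TRANSACTION;
    INSERT INTO Courses_deleted (SiteID, CourseID, DeleteDateTime)
    SELECT SiteID, CourseID, GETDATE()
    FROM Courses WHERE SiteID = 1 AND CourseID = 42;

    DELETE FROM Courses WHERE SiteID = 1 AND CourseID = 42;
COMMIT;
GO

-- Aggregate view for the occasions when live and dead rows are both needed.
CREATE VIEW Courses_View AS
    SELECT SiteID, CourseID, CAST(NULL AS DATETIME) AS DeleteDateTime
    FROM Courses
    UNION ALL
    SELECT SiteID, CourseID, DeleteDateTime
    FROM Courses_deleted;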
Your clustered index on your real table should be as small, static and constant as possible, so I would definitely NOT recommend putting such a datetime column into it. Not a good idea.
For excellent info on how to choose a good clustering key, and what it takes, check out Kimberly Tripp's blog entries:
GUIDs as PRIMARY KEYs and/or the clustering key
The Clustered Index Debate Continues...
Ever-increasing clustering key - the Clustered Index Debate..........again!
Marc
What are your requirements on data retention? Have you looked into an audit log instead of keeping all non-current data in the database?
I think you hit it right on the head with the composite indexes including your "DeleteDateTime" column.
I would create a view that is basically:
SELECT {all columns except DeleteDateTime}
FROM Courses
WHERE DeleteDateTime IS NULL
This is what I would use for all my queries against the table, to prevent people from forgetting to consider the deleted flag. SQL Server 2005 can easily handle this kind of view, and it is necessary if you are going to use this design for deleting records. I would have a separate index on the DeleteDateTime column; I likely would not make it part of the clustered index.
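That separate index could be as simple as the following (hypothetical index name), which would also support the DeleteDateTime IS NULL filter in the view above:

-- Nonclustered index on the soft-delete column; valid on SQL Server 2005.
CREATE NONCLUSTERED INDEX IX_Courses_DeleteDateTime
    ON Courses (DeleteDateTime);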

How to add a constant column when replicating a database?

I am using SQL Server 2000, and I have two databases that both replicate (transactional push subscription) to a single database. I need to know which database each record came from.
So I want to add a fixed column, specified in the publication, to my table so I can tell which database each row originated from.
How do I go about doing this?
I would like to avoid altering the main databases, mostly because there are many tables I would need to do this to. I was hoping for some built-in replication feature that would do this for me somewhere. Other than that, I would go with the view idea.
You could use a computed column. Use the following on each of the two databases (with a different literal, e.g. 'Server2', on the second one):
ALTER TABLE TableName ADD
MyColumn AS 'Server1'
Then just define the single "master" database to use a VARCHAR column (or whatever you want) that is filled from the computed column's value.
You can create a view, which adds the "constant" column, and use it as a replication source.
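A sketch of the view this answer describes, with hypothetical object names; each source database would expose the view rather than the base table, with a different literal per server. (Whether a plain view can serve as an article depends on the replication options; the resolution below ended up using transformations instead.)

-- On the first source database; use 'Server2' in the other one.
CREATE VIEW MyTable_ForReplication AS
    SELECT t.*, 'Server1' AS SourceServer
    FROM MyTable AS t;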
So the solution for me was to set up the replication publications to allow transformations, and to create a DTS package for each site that appends the site ID to the tables to keep the IDs unique, since I can't use GUIDs.
