I've got an application that has been working fine for quite a while, but there's an annoying issue that gets in the way on occasion.
Let's say I use an object such as OracleDataReader or MySqlDataReader to pass the data to the SqlBulkCopy object for insert. Let's assume that all the columns map just fine and, for the most part, it all works well.
Granted, I don't have control over the source application or database (which is either MySQL or Oracle). So some goof goes into a different application and puts a date of 5/31/0210 on the invoice table. He really meant to put in 5/31/2010, but the application he's using doesn't validate the data very tightly, and the Oracle database accepts it. For all intents and purposes, the date of 5/31/0210 is a valid date for the Oracle db. It might be stupid in terms of data entry, but it is what it is at this point.
Now our OracleDataReader comes along and transfers this invoice table over to SQL Server via SqlBulkCopy. It is passing the data to a perfectly matched table with the right column names and data types. You can see what is going to happen. This date of 05/31/0210 from Oracle is not accepted by the SQL Server engine, as the DATETIME type only allows dates from 1/1/1753 to 12/31/9999.
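A trivial way to see the engine-side failure outside the feed (plain T-SQL, hypothetical literal):

SELECT CAST('05/31/0210' AS datetime);
-- fails with an out-of-range datetime error, since DATETIME
-- only covers 1/1/1753 through 12/31/9999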
When it encounters this record, it simply fails with an overflow error. It doesn't skip the record; it kills the feed. So if it happens a thousand records into a million-record table, you don't get the remaining 999,000 records.
Is there any way to get around this issue so that the feed will continue?
Ideally, I'd like to move the receiving SQL Server DB to 2008 and use DATETIME2, which would allow for these goofy dates, but unfortunately not all my clients are ready to move to this version yet, so I'm stuck with DATETIME in SQL 2000/2005/2008.
Any ideas on how to get around this without changing the SQL? Ideally, I wouldn't mind if it just skipped the record. I know that I could do this in the SQL for the data reader, but that would be extremely complicated when you have twenty date fields in a single query. It would be a maintenance nightmare.
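Just to illustrate the problem: guarding a single date column in the source query would look something like this (an Oracle-side sketch with hypothetical names), repeated for every one of the twenty date fields:

select case
           when invoice_date < date '1753-01-01' then null
           else invoice_date
       end as invoice_date
from invoice;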
Any thoughts would be appreciated.
One option would be to change the datetime column type to varchar, then add a derived column to convert the string to datetime. The trick would be to use a function in the derived column to validate the date and substitute an arbitrary datetime if the conversion would fail. If you do heavy date comparisons, persist the computed column and/or index it.
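A minimal sketch of that idea (hypothetical table and column names): the bulk copy loads into the varchar column, so an out-of-range value can no longer kill the feed, and the computed column does the validated conversion.

ALTER TABLE dbo.Invoice ADD InvoiceDateRaw varchar(30) NULL;

-- ISDATE() returns 0 for values outside the datetime range, such as
-- '05/31/0210', so those rows fall back to an arbitrary sentinel date.
ALTER TABLE dbo.Invoice ADD InvoiceDate AS
    CASE WHEN ISDATE(InvoiceDateRaw) = 1
         THEN CONVERT(datetime, InvoiceDateRaw)
         ELSE CONVERT(datetime, '1900-01-01')
    END;

(Note that persisting or indexing the computed column requires a deterministic expression, e.g. CONVERT with an explicit style rather than ISDATE().)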
I say all of this under the impression that SqlBulkCopy is not able to do transforms. Maybe it can. Hopefully, someone will chime in with a way to do so.
SSIS would be great in this situation, as you could do the transform and also get the performance benefits of the bulk update lock.
I have found several posts on using the GETDATE() function for a SQL Server linked table from an Access front-end VBA procedure. Those posts focus on the WHERE clause of the query, but I have been unable to find corresponding information on using GETDATE() for column assignment.
For example, I understand that in the WHERE clause, I would use something like this:
WHERE MyDate = CAST(GETDATE() AS DATE)
However, I am getting syntax errors in VBA when I try to assign the current date to a column, like this:
INSERT INTO MyTable ( SomeValue, TheDate ) SELECT 'Widget' AS Expr1, CAST(GETDATE() AS DATE) AS Expr2;
In this example, TheDate is defined as DateTime in SQL Server. Written like this, VBA reports "Syntax error (missing operator) in query expression 'CAST(GETDATE() AS DATE)'." I tried surrounding the expression with Access-friendly # date delimiters, but no luck there.
After spending about 30 minutes searching stackexchange.com various ways for MS Access Date() in SQL, I have been unable to find this. However, it is so simple that I am sure it has already been answered somewhere.
In MS Access, you should likely use the Now() and Date() functions (not 100% sure for linked SQL tables; you'll have to experiment). The first is the equivalent of GETDATE() in SQL Server; the second returns the current date without the time.
If you run this in Access on a linked table (not a PT query), it should read:
INSERT INTO MyTable ( SomeValue, TheDate )
VALUES ('Widget', Date());
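If you instead set it up as a pass-through query, the reverse applies: the statement is sent to SQL Server verbatim, so T-SQL syntax works, e.g. (a sketch using the question's table):

INSERT INTO MyTable (SomeValue, TheDate)
VALUES ('Widget', CAST(GETDATE() AS DATE));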
There seems to be some confusion here. If you are building an Access query, then ZERO of the SQL Server date functions and syntax matter. Your SQL MUST continue to be written to Access standards unless you are using a pass-through query.
However, I have seen this 100 times here.
What is the data type on sql server side?
Is it datetime, or datetime2?
And double, triple, quadruple (and more) check the linked table in design mode.
If you link to SQL Server using the standard legacy "SQL Server" driver (the one that has shipped for 20 years, since Windows 98SE)?
You MUST check whether Access is seeing those columns as text or as date columns (which in Access always allow a time part if you want).
Access code, queries, forms, and EVERYTHING should require ZERO changes if you migrate that data from Access to SQL Server and link the tables. Again: ZERO changes.
However, if you used datetime2 on the SQL Server side? Then you CANNOT use the legacy "SQL Server" driver when linking the tables. The reason is that it doesn't support the newer datetime2 format. As a result, Access will actually see, use, and process that column as a text column. You REALLY, but REALLY do not want that to occur.
Why?
Because then you spend the next week asking questions on SO about why some date code or column or query does not work.
again:
ZERO changes are required in Access. If your dates are starting to break, then the issue is not date formats, but that the column is now being seen by Access as a TEXT data type.
Solution:
Either change the SQL-side datetime2 columns to datetime and re-link,
or
re-link your tables using a newer Native Client 11 driver (or later; the ODBC drivers are up to 18 now). That way, Access will see/use/process the datetime2 columns as a proper date type.
So, before you do anything? Open one of the Access tables linked to SQL Server in design mode (ignore the read-only prompt). Now, look at the data type assigned to the date columns. If they are text, then you have a royal mess.
You need to re-link using the newer ODBC drivers.
None of your existing code, SQL, and queries should be touched or even changed if you are using a linked table to SQL Server. But then again, if you linked using the wrong SQL ODBC driver, then Access cannot see nor process those datetime2 columns as dates; it will be using text, and you really, really don't want to allow that to occur.
In summary:
Any date code, SQL updates, sorting, queries, VBA code, form code, and reports should continue to work with ZERO changes. If you are making changes to dates after a migration, then you have done this all wrong, and those date columns are not being seen by Access as date columns.
Either get rid of all datetime2 columns and then re-link (change them server-side to datetime), or re-link the tables using a Native Client 11 or later ODBC driver. Either of these choices will fix the issue.
This is a fix that requires ZERO code and ZERO changes to how Access deals with dates.
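If you take the datetime route, the server-side change is a one-line ALTER per column (hypothetical names), followed by a re-link:

ALTER TABLE dbo.MyTable ALTER COLUMN TheDate datetime NULL;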
My company has a really old Access 2003 .ADP front-end connected to an on-premises SQL Server. I was trying to update the front-end to MS Access 2016, which is what we're transitioning to, but when linking the tables I get all the fields in this specific table as #Deleted. I've looked around and tried to change some of the settings, but I'm really not that into SQL Server to know what I'm doing, hence asking for help.
When converting the table to a local one, all the info is correctly displayed, which raises the question. Also, skipping to the last record will reveal the info on that record, and sorting/filtering reveals some of the records, but most of the table stays "#Deleted"...
Since I know you're going to ask: yes, I need to edit the records. Although the snapshot method would work for people trying to view the info, some of us need to edit it.
I'm hoping someone can shed some light on this,
Thanks in advance, Rafael.
There are 3 common reasons for this:
You have bit fields in SQL Server, but they are null. They should be assigned a default of 0.
The table in question does NOT have a PK (primary key).
Last but not least, you need (want) to add a timestamp column. Keep in mind that this is really what we call a "row version" column (so it is not a date/time column, but a timestamp column). Adding this column will help Access determine whether a record has been changed, and this is especially the case for any table/form in Access that allows editing of "real" number data types (single, double). If Access does not find a timestamp column, then it reverts to a column-by-column comparison to determine changes, and due to how computers handle "real" numbers (with rounding), such comparisons often fail.
So, check for the above 3 issues. You likely should re-run the linked table manager after making any changes.
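A rough T-SQL sketch of all three fixes (table, column, and constraint names are hypothetical):

-- 1) give null bit columns a default, and backfill the existing nulls
ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_Active DEFAULT 0 FOR Active;
UPDATE dbo.MyTable SET Active = 0 WHERE Active IS NULL;

-- 2) add a primary key if the table has none
ALTER TABLE dbo.MyTable ADD ID int IDENTITY(1,1) NOT NULL PRIMARY KEY;

-- 3) add a rowversion ("timestamp") column
ALTER TABLE dbo.MyTable ADD RowVer rowversion;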
I receive new data files every day. Right now, I'm building the database with all the required tables to import the data and perform the required calculations.
Should I just append each new day's data to my current tables? Each file contains a date column, which would allow for a "WHERE" query in the future if I need to analyze data for one particular day. Or should I be creating a new set of tables for every day?
I'm new to database design (coming from Excel). I will be using SQL Server for this.
Assuming that the structure of the data being received is the same, you should only need one set of tables rather than creating new tables each day.
I'd recommend storing the value of the date column from your incoming data in your database, and also having a CreateDate column in your tables with a default value of GETDATE(), so that it automatically gets populated with the current date when the row is inserted.
You may also want to have another column to store the data filename that the row was imported from, but if you're already storing the value of the date column and the date that the row was inserted, this shouldn't really be necessary.
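A minimal sketch of such a table (all names are made up; the date type assumes SQL Server 2008 or later):

CREATE TABLE dbo.DailyData (
    ID         int IDENTITY(1,1) PRIMARY KEY,
    FileDate   date         NOT NULL,                      -- the date column from the incoming file
    SomeValue  varchar(100) NULL,                          -- your actual data columns go here
    CreateDate datetime     NOT NULL DEFAULT (GETDATE()),  -- auto-populated insert date
    SourceFile varchar(260) NULL                           -- optional: originating filename
);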
In the past, when doing this type of activity using a custom data loader application, I've also found it useful to create log files to log success/error/warning messages, including some type of unique key of the source data and target database (i.e., if coming from an Excel file and going into a database table, you could store the row index from Excel and the primary key of the inserted row). This helps track down any problems later on.
You might want to consider having a look at SSIS (SQL Server Integration Services). It's the SQL Server tool for ETL activities.
yes, append each day's data to the tables; one set of tables for all data.
yes, use a date column to identify the day that the data was loaded.
maybe have another table with a date column and a clob column: the date to contain the load date and the clob to contain the file that you imported (see the sketch below).
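Something like this (hypothetical names; SQL Server's stand-in for a clob is varchar(max)):

CREATE TABLE dbo.ImportFileLog (
    LoadDate     date         NOT NULL,  -- when the file was loaded
    FileContents varchar(max) NULL       -- the raw imported file
);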
Good question. You most definitely should have a single set of tables and append the data daily. Consider this: if you create a new set of tables each day, what would, say, a monthly report query look like? A quarterly report query? It would be a mess, with UNIONs and JOINs all over the place.
A single set of tables with a WHERE clause makes the querying and reporting manageable.
You might do a little reading on relational database theory. Wikipedia is a good place to start. The basics are pretty straightforward if you have the knack for it.
I would have the data load into a stage table regardless and append to the main tables after. Once a week I would then refresh all data in the main table to ensure that the data remains correct as per the source.
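A sketch of that pattern (table names and the file path are hypothetical):

-- load the day's file into an empty stage table
TRUNCATE TABLE dbo.DailyData_Stage;

BULK INSERT dbo.DailyData_Stage
FROM 'C:\feeds\daily.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- then append the staged rows to the main table
INSERT INTO dbo.DailyData (FileDate, SomeValue)
SELECT FileDate, SomeValue
FROM dbo.DailyData_Stage;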
Marcus
I'm having a problem with a linked Informix table in MS SQL Server 2008 R2. When I query this table, it seems to ignore some of the criteria I'm passing to it, but not others. For example, if I put a condition on the rowdate field, the remote query part of the execution plan does not show any WHERE clause, but if I put criteria on another field such as ACD, it does.
It seems it does not pass any criteria on the rowdate field, but does on all the others.
I know the field is indexed on the Informix side. If it helps, the table I'm linking is from Avaya CMS, and it is linked via the OpenLink ODBC driver.
EDIT:
As far as I know, it is Informix Dynamic Server 2000, and it is on Solaris. The column comes up as a DATE data type, which is correct. I have tried passing the criteria as '2010-08-03 00:00:00', '2010-08-03', CONVERT(date, '2010-08-03'), and a few more variations. When the data is returned to SQL Server, it is in the format yyyy-mm-dd.
When I view the execution plan I can see the remote query with all the other criteria followed by a filter for only the rowdate field.
I know that rowdate is indexed, and the driver does normally communicate that information, as we use it in other applications (Business Objects and MS Access) and they don't have a problem.
I managed to figure it out, but it is the strangest thing ever. I went down the route of passing the date in different formats. My default is to use the normal YYYY-MM-DD; that of course did not work, so I tried YYYY-MMM-DD, still nothing. After going through LOTS of combinations, I found one that works, Mmm-DD-YYYY, and it has to be exactly that! SEP-21-2010 won't work, but Sep-21-2010 will.
I wonder if this is just a strange hang-up from Informix or something in the driver; anyway, it works.
On a side note, has anyone noticed how strange it is that people from America write the date month, day, year? Stop and think about it for a second: do you say the number 2410 as "four hundred, ten, two thousand"? The best part is to ask yourself this: what day is American Independence Day? Most Americans say, "That's easy, you limey person, it is the 4th of July." Hmmm, day, month (year); the only date they say the right way round is the date they got their independence. I will leave it up to the SO community to see the irony in that.
Example query below:
select *
from OPENQUERY(AVAYA, 'select row_date, starttime, intrvl, acd
                       from root.hagent
                       where row_date = ''NOV-22-2012''
                         and acd = 1 and split = 1 and starttime = 1900')
By the way, I managed to extract accurate data via both MMM-DD-YYYY and Mmm-DD-YYYY.
I have a database whose back end I just converted to SQL Server using SSMA. I left the front end in MS Access. I only converted the tables, not the queries. It already had some data in it, and that moved over just fine.
All was going well until just recently. On opening the database and loading the main form, Event Interest, it started having problems with the first record of the subform, called Names. The first field in the first record has data sometimes and not others. This is a text field. When data is in the field, it puts in random numbers; I believe they may be related to SQL somehow. When the data is missing, you can select the field and hit the backspace key and the data will appear, minus the one character you just erased. I have no idea what is going on.
Any help you can supply I would greatly appreciate it. Thank you in advance.
I am new to SQL Server and I have used older versions of MS Access for a few years.
I am not certain what the problem might be, but these are some considerations that come to mind:
try deleting and recreating your linked tables. Perhaps an update to the table structure (or view, if you're linked to a view) has invalidated some of the metadata stored in the table link in your front end.
does your table have a primary key? If not, you really need one. There really is no such thing as a properly-designed data table in a relational database that is PK-less.
does your table have a timestamp? If not, add one, as this helps Access keep track of whether or not the data has changed on the server.
However, let me add that none of these issues manifest themselves exactly with the symptoms you've described, so they may not help.