An example of the dbf database file structure

I need a short example of how to generate dbf files.
I saw the following link:
Data File Header Structure for the dBASE Version 7 Table File
I am writing my program in C#.
For example, I want to produce the following table (as a binary file):
Field Name   Type        MaxLength
----------------------------------
DSK_ID       Character   100
DSK_ADRS     Numeric     2

Are you trying to create the table within FoxPro (Visual FoxPro) itself, in dBASE, or from a .NET/Java language? Your tags are unclear about what you are really getting into, and creating the table by writing the low-level file format by hand is not the way to go.
I can expand this answer further, but suggest you edit your question to provide more detail.
The basic syntax, if you are using Visual FoxPro, would be something like:
create table SomeTableName ( DSK_ID C(100), DSK_ADRS N(2,0) )
But again, would need more on the environment you plan on working with.
Knowing you want to do this via C#, I would start by downloading Microsoft's VFP OLE DB provider.
Then you can look at the many other links for connecting, querying (always parameterize), and executing what you need. Here is a short example that gets a connection and creates the table you want. From there it is up to you to query, insert, and update as needed.
// Requires a reference to System.Data and "using System.Data.OleDb;" at the top of the file
OleDbConnection oConn = new OleDbConnection("Provider=VFPOLEDB.1;Data Source=C:\\SomePath");
OleDbCommand oCmd = new OleDbCommand();
oCmd.Connection = oConn;
oCmd.Connection.Open();
oCmd.CommandText = "create table SomeTableName ( DSK_ID C(100), DSK_ADRS N(2,0) )";
oCmd.ExecuteNonQuery();
oConn.Close();
Now, note, the "Connection" string has a Data Source. This should point to a PATH location where you WANT TO CREATE and/or QUERY the tables. You can have one connection that points to a folder that has 100+ tables and you can eventually query from any of them. But again, those are going to be other questions that you can find LOTS of answer to for sampling... for example, just search on
VFP OleDB C# and you will get plenty of hits
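For the insert/update side, here is a minimal sketch of a parameterized INSERT through the same VFP OLE DB provider; the path and the inserted values are placeholders, and the table matches the CREATE TABLE above.

using System.Data.OleDb;

// Parameterized INSERT against the table created above.
// The VFP OLE DB provider uses positional (?) parameter markers, so add parameters in column order.
using (var conn = new OleDbConnection("Provider=VFPOLEDB.1;Data Source=C:\\SomePath"))
using (var cmd = new OleDbCommand(
    "insert into SomeTableName (DSK_ID, DSK_ADRS) values (?, ?)", conn))
{
    cmd.Parameters.AddWithValue("DSK_ID", "some identifier text");
    cmd.Parameters.AddWithValue("DSK_ADRS", 42);

    conn.Open();
    cmd.ExecuteNonQuery();
}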

How are you going to handle memo files? Compound index files?
Just use the ODBC or OLE DB providers via COM interop and issue a CREATE TABLE.

Related

SQL Blob to Base 64 in Table for FileMaker

I have looked around and found some instances where something similar is being done for websites, etc.
I have a SQL table that I am accessing in FileMaker Pro (through ESS) via an ODBC connection to the SQL database, and I have everything I need except for one field (LNL_BLOB) in one table (duo.MMOBJS), which is an image ("image, null") and cannot be accessed via the ODBC connection.
What I am hoping to accomplish is to find a way that when an image is placed in that field, it is ALSO converted to Base64 in another field in the same table. Also, the database creator has a "View" (a foreign concept to us FileMaker developers) with this same data, called "dbo.VW_BLOB_IMAGES", if that is helpful.
If there is a field with Base64 text, within FileMaker I can decode it to get the image.
What thoughts do you all have? Is there an even better way?
NOTE: I am using many tables and lots of the data in the app that I have made; this image is not the only reason I have created the ODBC connection.
Well, one way to get Base64 out of SQL would be to trick the XML engine in SQL into converting your column to Base64, then strip out the XML wrapper:
-- FOR XML ... BINARY BASE64 emits <r B="...base64..."/>; the SUBSTRING strips
-- the 6-character <r B=" prefix and the 3-character "/> suffix.
SELECT SUBSTRING(Q.Base64Data, 7, LEN(Q.Base64Data) - 9)
FROM (SELECT
        (SELECT LNL_BLOB AS B
         FROM duo.MMOBJS
         FOR XML RAW('r'), BINARY BASE64
        ) AS [Base64Data]
     ) AS [Q]
You'd probably want to add that to your SELECT statement or to a view, rather than add it to the table; but you could also write a trigger that maintains the field using that definition.
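If maintaining the column from outside the database is acceptable, another option is a small batch job that reads the blob and writes the Base64 text back. This is only a sketch under assumptions: it presumes a key column (OBJID), a text column to hold the Base64 (LNL_BLOB_B64), and a connection string, none of which appear in the original schema, so adjust the names accordingly.

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch: fill a hypothetical LNL_BLOB_B64 text column from the LNL_BLOB image column.
// OBJID, LNL_BLOB_B64 and the connection string are assumptions, not the real schema.
using (var conn = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=true"))
{
    conn.Open();

    var rows = new DataTable();
    using (var select = new SqlCommand(
        "SELECT OBJID, LNL_BLOB FROM duo.MMOBJS WHERE LNL_BLOB IS NOT NULL", conn))
    using (var reader = select.ExecuteReader())
    {
        rows.Load(reader);
    }

    foreach (DataRow row in rows.Rows)
    {
        // Convert the image bytes to Base64 text and write it back to the companion column.
        string b64 = Convert.ToBase64String((byte[])row["LNL_BLOB"]);

        using (var update = new SqlCommand(
            "UPDATE duo.MMOBJS SET LNL_BLOB_B64 = @b64 WHERE OBJID = @id", conn))
        {
            update.Parameters.AddWithValue("@b64", b64);
            update.Parameters.AddWithValue("@id", row["OBJID"]);
            update.ExecuteNonQuery();
        }
    }
}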

Export Azure SQL Database to XML File

I have searched Google and this site for about 2 hours trying to work out how to do this, and I have had no luck finding an approach that fits or that I understand. As the title says, I need to export table data to an XML file. I have an Azure SQL database with table data.
Table name: District
Table Columns: Id, name, organizationType, address, etc.
I need to take this data and create an XML file that I can save so that it can be given to others.
I have tried using:
SELECT *
FROM dbo.District
FOR XML PATH('districtEntry'), ROOT('leaID')
It gives me the data in XML format, but I don't see a way to save it.
Also, there are some functions I need to be able to perform with the data:
Program should have these options:
1) Export all data.
2) Export all rows created or updated since a specified date.
Files should be named in the format ENTITY.DATE.XML, as in DISTRICT.20150521.XML (use the date in YYYYMMDD format).
This leads me to believe I need to write code other than SQL, since one requirement is to query the table for certain data elements as well.
I was wondering if I would need to download any database server data tools, write code, and if so, in what language, etc. The XML file creation would, I believe, need to be automated after every update of the table or after a query.
I am very confused and in need of guidance, as I have now almost given up hope. Please let me know if I need to clarify anything. Thank you.
P.S. I would have given pictures but I do not have enough reputation to supply them.
I would imagine you're looking to write a program in VB.NET or C#, using ADO.NET in either case. Here's an MSDN article with a complete sample of how to connect to and query SQL Azure:
https://msdn.microsoft.com/en-us/library/azure/ee336243.aspx
The example shows how to write the output to the Console, but you could similarly write it to a file using something like a StreamWriter; a rough sketch is below.
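This is only a sketch under assumptions: the connection string, the output folder, and the lastModified column used for the "changed since" filter are placeholders (the question does not name such a column on District). It reuses the FOR XML PATH query from the question and the ENTITY.DATE.XML naming from the requirements.

using System;
using System.Data.SqlClient;
using System.IO;
using System.Xml;

// Sketch: export dbo.District rows changed since a cutoff date to DISTRICT.YYYYMMDD.XML.
string connStr = "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
                 "User ID=youruser;Password=yourpassword;Encrypt=true";
DateTime since = new DateTime(2015, 5, 21);   // use DateTime.MinValue to export all rows

using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand(
    @"SELECT * FROM dbo.District
      WHERE lastModified >= @since
      FOR XML PATH('districtEntry'), ROOT('leaID')", conn))
{
    cmd.Parameters.AddWithValue("@since", since);
    conn.Open();

    string fileName = "DISTRICT." + DateTime.Today.ToString("yyyyMMdd") + ".XML";

    // ExecuteXmlReader streams the FOR XML result without the row splitting
    // you would hit reading the XML through an ordinary data reader.
    using (XmlReader xml = cmd.ExecuteXmlReader())
    {
        var doc = new XmlDocument();
        doc.Load(xml);
        doc.Save(Path.Combine(@"C:\exports", fileName));
    }
}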
You could also create a sqlcmd script to do this, following the guidelines here to connect using sqlcmd:
https://msdn.microsoft.com/en-us/library/azure/ee336280.aspx
Alternatively, if this is a process that does not need to be automated or repeated frequently, you could do it using SSMS:
http://azure.microsoft.com/en-us/documentation/articles/sql-database-manage-azure-ssms/
Running your query through SSMS would produce an XML document, which could be saved using File -> Save As.

Copying tables between databases with different authentication in DB2

Hey StackOverflow community,
My question is as follows:
I have a table, say USER_ADDR, with a bunch of columns, in one database, say DB001.
I need to copy the contents of this table (based on a criterion) to a similar table USER_ADDR (same name, yes) in another database, DB002, which uses a different user ID and password.
I need to do this in a stored procedure that will be executed from a .NET application.
I tried this:
INSERT INTO "DB002".USER_ADDR (--column names--)
SELECT *
FROM "DB001".USER_ADDR
WHERE ID = "APPLICATION_NO_IN";
I get:
0: Error occurred: [IBM][DB2/NT64] SQL0204N "DB002.USER_ADDR" is an undefined name. LINE NUMBER=15. SQLSTATE=42704 : -204: IBM.Data.DB2: 42704
What am I doing wrong?
Thanks in advance
Vashist
I'm deleting my other answer after seeing the additional info about your use case. LOAD is mainly for bulk loads of large numbers of records.
In this case I'd recommend something like the following: open connection1 in .NET to your source database, select the data, and hold it in a .NET DataTable. If required, you can do that select in a stored proc that returns either individual column values for a single row or a cursor (rowset) containing all the columns (and rows). Then open connection2 in .NET and insert the data from the DataTable into your destination. Again, that can be done with a stored proc. A rough sketch of the pattern is below.
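This sketch assumes the IBM DB2 .NET provider (IBM.Data.DB2); the connection strings, credentials, column names (COL1, COL2) and the criteria value are placeholders.

using System.Data;
using IBM.Data.DB2;

// Sketch: pull matching rows from DB001 into a DataTable, then insert them into DB002
// under its own credentials. Column names and connection strings are placeholders.
var sourceConnStr = "Server=dbhost:50000;Database=DB001;UID=user1;PWD=pwd1";
var targetConnStr = "Server=dbhost:50000;Database=DB002;UID=user2;PWD=pwd2";
var applicationNo = "12345";   // placeholder for APPLICATION_NO_IN

var buffer = new DataTable();

// 1) Read the rows to copy from the source database.
using (var src = new DB2Connection(sourceConnStr))
using (var cmd = new DB2Command("SELECT * FROM USER_ADDR WHERE ID = @appNo", src))
{
    cmd.Parameters.Add(new DB2Parameter("@appNo", applicationNo));
    src.Open();
    using (var reader = cmd.ExecuteReader())
        buffer.Load(reader);
}

// 2) Write them to the target database, row by row (fine for modest volumes).
using (var dst = new DB2Connection(targetConnStr))
{
    dst.Open();
    foreach (DataRow row in buffer.Rows)
    {
        using (var insert = new DB2Command(
            "INSERT INTO USER_ADDR (COL1, COL2) VALUES (@c1, @c2)", dst))   // list the real columns here
        {
            insert.Parameters.Add(new DB2Parameter("@c1", row["COL1"]));
            insert.Parameters.Add(new DB2Parameter("@c2", row["COL2"]));
            insert.ExecuteNonQuery();
        }
    }
}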
Another approach is to use an external script that connects to both databases.
Doing it from just one database is not possible unless, as already mentioned, you use information integration (federation) or export the data and then load it.

Copy data from one database to another using VB.NET

I need to copy data from one database to another using a VB.NET program.
The target database is SQL Server; the source database is some proprietary ODBC-compliant database.
I need to loop through a list of tables to copy: read the data from the source database table for a given modified date, delete the corresponding rows for that date from the target database table, and insert the records from the source table. The databases have the same structure, i.e. the same table names and field names, but the data types may differ (however, they are compatible, e.g. double in the source, float in the target). No primary keys exist.
Here's how I might do it:
Firstly, execute a Delete command against the target.
I could then use a DataReader to obtain data from the source, loop through the items, and create an Insert command for each row, adding parameters to the command with the appropriate values, executing it, and wrapping the whole thing in a transaction.
I was just wondering if I am missing a trick here. Any suggestions?
I think you should use the right tool for the job, and I'm guessing that is SSIS in this case, but I could be wrong, and perhaps you have already explored that path.
In that case, yes, a DataReader would do, depending on how much data you have. A DataTable might even be easier and faster to program (no need to worry about data types, since the adapter should take care of that).
The trick would be to use set-based operations and not the 'row at a time' approach which we programmers were first taught :)
Here's some pseudocode
INSERT INTO DestTable (columns, columns...)
(Select ModifiedRow from SourceTable where date = Modified)
Perhaps your requirements are more complicated and may need the row by row approach, but this is normally not the case.
I'd opt to put this code in a job step and schedule it on SQL Server. It could also be a stored procedure run from .NET.
Also, using SSIS for a db-to-db transfer is most likely overkill unless you are going to use some of the special transformations in there.
Take a look at the SqlBulkCopy class. If you can get the source into a DataTable or read it with an IDataReader, then it's eligible. It will also attempt to convert between compatible types. See Single Bulk Copy Operations for more details.
This would be more desirable than using INSERT statements for each row; a rough sketch is below.
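This is only a sketch under assumptions: the ODBC DSN, the SQL Server connection string, the table name, and the ModifiedDate column are placeholders for whatever the real source and target use.

using System;
using System.Data.Odbc;
using System.Data.SqlClient;

// Sketch: stream rows from the ODBC source straight into SQL Server with SqlBulkCopy.
// DSN, connection string, table name and modified-date column are placeholders.
DateTime modified = DateTime.Today;

using (var source = new OdbcConnection("DSN=LegacySource"))
using (var target = new SqlConnection("Server=.;Database=TargetDb;Integrated Security=true"))
{
    source.Open();
    target.Open();

    // Clear out the target rows for that modified date first, as described in the question.
    using (var delete = new SqlCommand(
        "DELETE FROM dbo.SomeTable WHERE ModifiedDate = @d", target))
    {
        delete.Parameters.AddWithValue("@d", modified);
        delete.ExecuteNonQuery();
    }

    using (var select = new OdbcCommand(
        "SELECT * FROM SomeTable WHERE ModifiedDate = ?", source))
    {
        select.Parameters.AddWithValue("@d", modified);   // ODBC parameters bind positionally
        using (var reader = select.ExecuteReader())
        using (var bulk = new SqlBulkCopy(target) { DestinationTableName = "dbo.SomeTable" })
        {
            // SqlBulkCopy will convert compatible types (e.g. double -> float) as it loads.
            bulk.WriteToServer(reader);
        }
    }
}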
' Check whether the SQL Server data directory is marked read-only.
Dim reader As System.IO.DirectoryInfo
reader = My.Computer.FileSystem.GetDirectoryInfo("c:\program Files\Microsoft SQL Server\MSSQL.1\mssql\data")
If (reader.Attributes And System.IO.FileAttributes.ReadOnly) > 0 Then
    MsgBox("File is readonly!")
Else
    MsgBox("Database is not read-only protected")
End If
Check all the tables first

Growing MS Access File Size problem

I have a large MS Access application with a lot of computations in VBA code. When I run it, it eventually crashes due to excessive file size. A lot of intermediate tables and queries are created and subsequently deleted, but Access does not reclaim the space. I have diligently closed all intermediate recordsets and set all temporary objects to Nothing, but nothing helps. The only way I can get my code to run is to run part of it, stop and compact/repair the file, then restart the code.
Isn't there a better way?
Thanks
You should be able to run the compact function from within your VBA code.
I had the snippet below bookmarked from a long time ago when I was doing Access work.
Public Sub CompactDB()
    CommandBars("Menu Bar").Controls("Tools").Controls("Database utilities").Controls("Compact and repair database...").accDoDefaultAction
End Sub
You can put that in your code to get around it.
NOTE: you might also consider moving to a larger database system if you are having these types of scaling issues.
What sizes are you dealing with? What is the error code when it crashes? I'd be surprised if it is simply because the file gets "too big", but I imagine there's a limit. From your description of all the temporary objects, it sounds as though there may be design improvements that would help.
EDIT: I expect you realize it's non-trivial to replace the database with something else, even if you try to keep whatever else is in the mdb besides the tables. Access QueryDefs are unique, Access SQL is non-standard, and you'd basically be starting over.
Most Access applications I've seen have lots of opportunity for refactoring; and it's usually not that difficult if a) you understand the logic and the business rules, and b) you have a solid understanding of Access programming. But that would be more or less true for any alternatives. If I were you and you're a little short in either area, maybe you can get some help. But I'd try to rescue the Access app first.
There's also a suggestion from another poster about moving the tables into one or more attached MDBs. That's a solid, well-proven technique in general. But first I'd get a handle on what the real cause of the problem is.
I'd push the data over to MS SQL (both the permanent data and the intermediate tables); you can leave the code portion in MS Access for the time being.
This solves two big issues:
The data will be inherently more stable/dependable (I can't tell you how many times I've had a corrupt MS Access database).
Your Access database won't grow/change very much (it should reach an equilibrium once all the code has been run and compiled).
Both of these mean no more having to compact/repair the database; you can get a free version (the Express Edition) of MS SQL, and it is not that hard to do.
If you do not want to switch to SQL Express or similar, you could look into the following ideas:
Open another 'external' Access database (mdb file) for all temporary tables, so you can put all temp data in the external file and throw that mdb file away when you close your app. In your code you would then manipulate both the CurrentDb object and the other database, which you build at startup and connect to through a Jet, OLE DB, or ODBC connection.
Separate your permanent tables from your code and, when needed, bring the data into your local client file to build your temporary tables. This can be done, for example, by linking the external database to the local/client file using "DoCmd.TransferDatabase acLink". It can also be done by connecting to the permanent data through an OLE DB connection, opening the needed recordset(s), and saving them locally as XML files. There are many other solutions that can be implemented here.
The state of affairs with regard to Jet file sizes is interminably problematic for me.
I am currently watching a piece of my own VBA code, running from Access database A, as it does a series of single-record field updates using ADO against a table in Access database B (via an updateable query reference in database A). The single field is a CHAR(8). With every 4 updates that go by, database B grows by about 8 KB. There is no good excuse for that. The growth in file size slows performance severely: with each growth, updates slow from about one per second (in a table of about 30-40K records, using single-record SQL lookups and no indexes anywhere) to one per 5-10 seconds.
Now, I admit, I did compact/repair database B prior to running this update code; perhaps if I had not done that, the performance would not have been this bad. Had the target field for the update been of, say, type Memo, then I would have expected this. But to carry out an update on a CHAR() field and get this result is simply not reasonable.
Most of the above suggestions (no criticism of any particular solution intended) appear to be valid for applications that use a relatively permanent arrangement (talking to the same target databases all of the time). Mine is not like that... I cannot alter the target database (database B), as it is generated and consumed by a vendor's tool that we use to export and import data from their application.
I understand and commend the above writers for coming up with solutions to users' problems. However, I cannot let it stand when poor software design/implementation gets in the way of users using a product as the users expect it to function.
I'm not an MVP, but Google found these. Maybe they'll help you:
http://www.mvps.org/access/general/gen0041.htm
http://forums.devarticles.com/microsoft-access-development-49/compact-database-via-vba-24958.html
Unfortunately, MS Access has problems when it gets too large; I think the maximum size for an Access DB is 2 GB.
You may consider moving to SQL Express, VistaDB, etc.
According to http://office.microsoft.com/en-us/access/HP051868081033.aspx, Access 2003 and 2007 have a 2 GB limit. However, it's easy to move some or all of the tables into a separate .mdb file and then link to those tables. It's good practice anyway to have two files: one for your data and one for all the macros, queries, and so on. You could even have multiple data files if your table file gets near the 2 GB limit.
I have encountered a similar issue where my database was bloating on raw data import. Instead of splitting the database and compacting the back end routinely, I decided to use the database object (DAO) to create a temp database, import the data into it, query/modify the data in that temp database, pull the results over to the original database via SQL, and then delete the temp database. Base code is shown below:
Sub tempAccessDatabaseImport()
    Dim mySQL As String
    Dim tempDBPath As String
    Dim tempPathArr() As String
    Dim i As Long
    Dim myWrk As DAO.Workspace
    Dim tempDB As DAO.Database
    Dim myObject As Object

    'Build the temp Access database path next to the current project
    tempPathArr = Split(Application.CurrentProject.Path, "\")
    For i = LBound(tempPathArr) To UBound(tempPathArr)
        tempDBPath = tempDBPath + tempPathArr(i) + "\"
    Next i
    tempDBPath = tempDBPath + "tempDB.accdb"

    'Delete the temp Access database if it already exists
    Set myObject = CreateObject("Scripting.FileSystemObject")
    If myObject.FileExists(tempDBPath) Then
        myObject.DeleteFile (tempDBPath)
    End If

    'Open the default workspace
    Set myWrk = DBEngine.Workspaces(0)

    'DAO: create the temp database
    Set tempDB = myWrk.CreateDatabase(tempDBPath, dbLangGeneral)

    'DAO: import the raw xlsx data into a temp Access table
    '(RAWDATAPATH and WORKSHEETNAME are constants defined elsewhere)
    mySQL = "SELECT * INTO tempTable FROM (SELECT vXLSX.* FROM [Excel 12.0;HDR=YES;DATABASE=" & RAWDATAPATH & "].[" & WORKSHEETNAME & "$] As vXLSX)"

    'DAO: execute the SQL
    Debug.Print mySQL
    tempDB.Execute mySQL, dbSeeChanges

    'Do something else (query/modify the imported data, pull it into the main database, etc.)

    'Close the DAO database objects
    tempDB.Close
    Set tempDB = Nothing
    myWrk.Close
    Set myWrk = Nothing

    'Delete the temp Access database when done
    If myObject.FileExists(tempDBPath) Then
        myObject.DeleteFile (tempDBPath)
    End If
End Sub
