I have been assigned the task of taking all the tables and records we have in QuickBase and importing them into a new database in MS SQL Server. I am an intern, so this is new to me. I have been able to export all tables except two of them into CSV files so that I can import them into SQL Server. The two tables that will not export show a QuickBase error saying that the report is too large and the maximum number of bytes in a report has been exceeded. My question is: can someone recommend a workaround for this? And also, do the CSV files have to be located on the server to import them, rather than stored on my machine?
Thanks in advance for any help.
When you export to CSV files, those files are downloaded from the browser onto your local machine, NOT the server. If you are running into issues with reports being too large, the filtering workaround described in the other answer (splitting the export into filtered chunks) is good enough.
The other possibility is to use the QuickBase API here:
http://www.quickbase.com/api-guide/index.html
Specifically, API_DoQuery is what you want to use.
I'm not a QuickBase expert or anything, but some poking around indicates that this is a limitation within QuickBase reporting. One option would be to export those tables in sections: filter first for records older than 60 days, then for records 60 days old or newer, or some similar filter that splits the tables into two or more chunks of mutually exclusive records.
When importing the CSVs with the import wizard, the wizard will give you the opportunity to navigate to the file. If you are running SSMS on your local computer (which you should be), then the file will be accessible if it's on your local machine.
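If you prefer to script the load rather than use the wizard, a T-SQL BULK INSERT also works. Note that, unlike the wizard, BULK INSERT reads the file from the server's point of view, so the CSV has to sit on the server itself or on a share the SQL Server service account can reach. A minimal sketch, with a made-up file path and table layout:
-- Hypothetical staging table matching the CSV columns
CREATE TABLE dbo.QuickBaseExport (
    RecordId    INT,
    RecordName  NVARCHAR(255),
    CreatedDate DATETIME
);

-- Load the exported CSV; the path must be visible to SQL Server,
-- e.g. a folder on the server or a UNC share.
BULK INSERT dbo.QuickBaseExport
FROM '\\fileshare\exports\table1.csv'
WITH (
    FIRSTROW        = 2,    -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n'
);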
You might try an existing open-source ETL tool to do this directly (QuickBase -> MS SQL Server), i.e. without landing any intermediate CSV file.
Look for example at CloverETL (which I used to interact with QuickBase):
http://doc.cloveretl.com/documentation/UserGuide/index.jsp?topic=/com.cloveretl.gui.docs/docs/quickbaserecordreader.html
They have a Community edition (free): http://www1.cloveretl.com/community-edition
We are migrating Delphi 7 applications to Delphi XE and replacing the BDE database components with ADO. In the Delphi 7 applications we have made heavy use of the TwwQuery (InfoPower) component. Since TwwQuery is only supported by the BDE, we have to replace TwwQuery with TADOQuery. We have around 20+ applications to migrate, and going through all the TwwQueries and replacing them manually is very time consuming. Is there any code or script which can replace all the TwwQuery components with TADOQuery?
GExperts has a wizard that will do this. You can right-click on any TwwQuery, and choose to replace it with a TADOQuery. There is an option to do this for all instances it finds in the application, as well as the selected one.
The SQL property should map across with no problem - obviously you then need to find some way of setting the Connection property to an ADO connection. Or you could do this at runtime by writing some shared initialisation code, added to each form, that loops through the form's components looking for TADOQuery instances and sets the Connection property on each one it finds.
(Remember to also check your uses clause for the DBTables unit as well as removing the wwXXX entries - if you don't remove all references to it I'm pretty sure the BDE will still be needed)
reFind.exe, the Search and Replace Utility Using Perl RegEx Expressions
http://docwiki.embarcadero.com/RADStudio/XE5/en/ReFind.exe,_the_Search_and_Replace_Utility_Using_Perl_RegEx_Expressions
I think it must be:
refind *.pas *.dfm /I /W "/P:TwwQuery" /R:TADOQuery
I am not aware of an existing script or library that will do exactly what you want but it shouldn't be too difficult to write one yourself. The basic theory would be something like this:
Load the contents of the DFM and PAS source files as text.
Scan through looking for instances of TwwQuery.
Replace the definition of the component with TADOQuery.
Update each query's properties with the ADO equivalents (I believe they are fairly similar though).
Save the file.
Obviously there's a bit of trial and error involved in getting it exactly right, but once you have done so, it should work for all of your applications.
I know this is not what you asked, but I will share my experience with you. My biggest problem with legacy code in database applications is that the main logic of it is dataset-dependent.
When I faced the very same problem you have now (I was going to replace TQuery and TwwQuery with TADOQuery), I decided to end my dependence on TxxxQuery and depend on TClientDataSet (CDS) instead. I found CDS a much better dataset to work with: it has features you won't find in other datasets, such as aggregate fields. With CDS you can load new records on demand without selecting all the rows again, and you have TDataSetField as another way of handling master-detail relationships. To me it was more than enough.
Besides, the particular database-access dataset stays behind a TDataSetProvider component. Your main logic depends on CDS only, not on TADOQuery, TODACQuery or any other TxxxQuery you may need in the future. That was the last time I had to worry about this issue. If I ever have to replace my datasets again, it won't affect the critical business logic anymore, only the persistence logic (which I moved to another DataModule)!
So I planned my whole migration strategy around moving from TQuery and TwwQuery to TClientDataSet as the first step, and then replacing TQuery and TwwQuery with TADOQuery as the second step.
I didn't use any helpers to refactor the code. It was indeed a lot of work, but it only had to be done once.
Nowadays I have replaced the middleware datasets (TQuery, TwwQuery, TADOQuery, etc.) with a persistence service that runs a query and returns an OleVariant with the records found. All I have to do is assign that OleVariant to the TClientDataSet.Data property and it's done!
No more dependence on any particular dataset in the application's code, except for CDS!
I did see some other posts on this, but they were rather old and there do not appear to be any solutions at this point.
I'm trying to determine where particular tables that SSIS loads during a monthly job are being used in other packages. The package that loads these tables has been taking much longer over the past several months than before, and I'm trying to see if I can eliminate this load altogether.
I just happened to check the Allocation packages in our database to see how the tables were being used, and discovered that I can't find where or when those tables are actually used. Is there a function or query I can run in SSMS or elsewhere to find this information?
Thanks in advance - please let me know if I need to clarify anything.
The packages are just XML files. If you have the packages somewhere on your file system you can use any program that searches through text files.
I'm not sure about older SSIS projects, but with an SSIS project in SQL Server Data Tools for SQL Server 2012 you can just use the built-in search function to search through your entire solution. It will also search the XML of all the packages.
If you don't have this particular information saved anywhere already in your documentation then I think you are going to have some difficulty in finding an accurate way to retrieve this information. However, there are a few automated data collection options that might help you get most of the way there.
The first option: because SSIS packages are essentially glorified XML being fed into an engine, you can perform a pattern search on the package files (e.g. with grep) to look for that particular table name. Any packages that dynamically retrieve and build the table name, though, would not be found through this method.
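A variation on this first option: if the packages are deployed to the msdb database (the legacy package deployment model) rather than stored as files, a similar text search can be done in T-SQL. This is only a sketch, assuming the msdb.dbo.sysssispackages catalog table, which stores each package's XML in the packagedata column; 'MyStagingTable' is a placeholder for the table you are hunting for.
-- Search deployed packages in msdb for a table name (package deployment model)
SELECT  p.[name]      AS package_name,
        f.foldername  AS folder_name
FROM    msdb.dbo.sysssispackages AS p
LEFT JOIN msdb.dbo.sysssispackagefolders AS f
        ON p.folderid = f.folderid
-- packagedata is stored as an image column; cast it to text before searching
WHERE   CAST(CAST(p.packagedata AS varbinary(max)) AS varchar(max))
        LIKE '%MyStagingTable%';
Dynamically built table names will, of course, escape this search as well.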
Another option would be to run a server-side SQL trace with a pattern match based on the table name(s), limited to the host or application name used by SSIS. Run over the course of a month or so, that would make for a fairly accurate list.
I haven't used it myself, but the DOC xPress tool from PragmaticWorks might be what you're looking for.
I use Java, Tomcat, JSF and PrimeFaces.
I have a field "image" in my product table, and I'd like to ask which is better: saving the image in the database or in a directory on the server?
If the second option is the better one, please explain how to manage it (I have never had to handle this case before).
Thank you in advance.
Whether you save the image binary in the database or save the image file in a server directory depends on your needs.
The never-ending debate is still going on, and you can find it easily with a few keystrokes.
Here is how I do it (the second approach), and it has never disappointed me:
Assuming you are building an e-commerce website and want to host seller uploaded images such as product images and show them later.
Save the uploaded image files physically on your server, outside of your WAR. Let's say the directory is /baseFolder/ and the full path of the file is /baseFolder/path/to/image.jpg.
Add a virtual host in Tomcat so that http://localhost:8080/imageBase/ points to the physical directory /baseFolder/. There are many other ways to make this kind of mapping.
Store the relative path /path/to/image.jpg in your database, so that when you want to show the image you can simply write: <img src="#{webappRoot}/baseFolder/#{pathRetrievedFromDatabase}"/>. In this case that would be <img src="#{webappRoot}/baseFolder/path/to/image.jpg"/>
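For the database side of that last step, the product table then only needs a text column for the relative path rather than a binary column for the image itself. A minimal sketch (table and column names are made up for illustration):
-- Product table storing only the relative path to the image file,
-- not the image bytes themselves
CREATE TABLE product (
    id          INT PRIMARY KEY,
    name        VARCHAR(200) NOT NULL,
    image_path  VARCHAR(500)          -- e.g. '/path/to/image.jpg'
);

-- After a successful upload, record where the file was written:
INSERT INTO product (id, name, image_path)
VALUES (1, 'Red widget', '/path/to/image.jpg');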
There are of course a lot of different ways to achieve the same thing.
This is the simplest way I can think of to explain how it is managed.
Hope it helps.
I wish to make a regular backup of my notes stored on my iPhone, iPad and Mac OS in the standard Notes.app. Unfortunately, since Apple moved these from their standard IMAP format to a database format (and added a separate app), this is close to impossible.
I currently have over 200 notes and growing. I suppose they are stored in a standard database format and get synced to iCloud and pushed to all devices.
Notes seems to store its data in this path:
"Library/Containers/com.apple.Notes/Data/Library/Notes/"
If any of you can reliably read, and perhaps even back up and restore, this database, please comment.
There is an Apple KB article, HT4910, that deals with this issue, but it proves to be of little help. In fact their method complicates things and is very inelegant for multiple backups.
Time Machine, Apple's own built-in backup solution, is also of little help, as it seems to skip backing up notes and allows no restore for them.
I'd be grateful if someone could look into this and come up with solutions, which would certainly be appreciated by many in the growing community of iCloud users.
Ok, this is a somewhat incomplete answer but I wanted to post it so people may be able to contribute.
Using this lsof command:
lsof -c Notes | grep /Users/
I was able to figure out that most of the Notes.app data was being stored here:
/Users/USERNAME/Library/Containers/com.apple.Notes/Data/Library/Notes
In that folder there are three files (for me at least):
NotesV1.storedata
NotesV1.storedata-shm
NotesV1.storedata-wal
Which strangely enough pointed me in this direction:
https://superuser.com/questions/464679/where-does-os-x-mountain-lion-store-notes-data
I also found an SQLite cache database here:
/Users/USERNAME/Library/Containers/com.apple.Notes/Data/Library/Caches/com.apple.Notes/Cache.db
Though investigating it with sqlite3 only turned up a few uninteresting tables:
sqlite> .tables
cfurl_cache_blob_data cfurl_cache_response
cfurl_cache_receiver_data cfurl_cache_schema_version
If you need to get your notes back, first disconnect from the internet. Then copy your notes folder to a safe place (e.g. the desktop), delete the folder in your Library, and then copy the safe copy back into the Notes location in your Library.
You will see one file with an old date; that is the main file you need to get the notes back. You can delete the other two newer-dated files, as they are copies from iCloud.
Now you can open Notes.app and you will see that all your old notes are back.
I was poking around trying to accomplish the same thing and found where the notes are stored in 10.8.5 Mountain Lion. It is very straightforward. The location is as follows:
/Users/(your user)/Library/Mail/V2/Mailboxes/Notes.mbox/(long number with hyphens)/Data/Messages/
The individual notes are stored in that location with a number.emlx name format.
If you copy the Notes.mbox folder, that should get them all.
On the project I am working on, we are trying to come up with a solution for keeping the database and the code agile and able to be built and deployed together.
Since the application is a combination of code plus the database schema and code/lookup tables, you cannot truly have a full build of the application unless you have a database that is versioned along with the code.
We have not yet been able to come up with a good agile method of doing the database development along with the code in an agile/scrum environment.
Here are some of my requirements:
I want to be able to have an svn revision # that corresponds to a complete build of the system.
I do not want to check in binary files into source control for the database.
Developers need to be able to commit code to the continuous integration server and build the entire system and database together.
Must be able to automate deployment to different environments without doing a rebuild other than the original build on the build server.
(Update)
I'll add some more info here to explain a bit further.
No O/RM tool, since it's a legacy project with a huge amount of code.
I have read the agile database design information, and that process in isolation seems to work, but I am talking about combining it with active code development.
Here are two scenarios:
Developer checks in a code change that requires a database change. The developer should be able to check in the database change at the same time, so that the automated build doesn't fail.
Developer checks in a DB change that should break the code. The automated build needs to run and fail.
The biggest problem is how these things sync up. There is no such thing as "checking in a database change". Right now applying the DB changes is a manual process someone has to do, while code changes are constantly being made. They need to be made and checked in together, and the build system needs to be able to build the entire system.
(Update 2)
One more add here:
You can't bring down production; you must patch it. It's not acceptable to rebuild the entire production database.
You need a build process that constructs the database schema and adds any necessary bootstrapping data. If you're using an O/R tool that supports schema generation, most of that work is done for you. Whatever is not tool-generated, keep in scripts.
For continuous integration, ideally a "build" should include a complete rebuild of the database and a reload of static testing data.
I just saw that you have no ORM tool... here's what we had at a company I used to work for:
db/
db/Makefile (run `make` to rebuild db from scratch, `make clean` to close db)
db/01_type.sql
db/02_table.sql
db/03_function.sql
db/04_view.sql
db/05_index.sql
db/06_data.sql
Arrange however necessary... each of those *.sql scripts would be run in order to generate the structure. Developers each had local copies of the DB, and any DB change was just another code change, nothing special.
If you're working on a project that already has a build process (Java, C, C++), this is second nature. If you're using scripts in such a way that there is no build process at all, this'll be a bit of extra work.
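As an illustration of what those numbered scripts contain, they are just plain DDL/DML run in order; the tables and data below are invented for the sketch:
-- db/02_table.sql: plain DDL, rebuilt from scratch on every `make`
CREATE TABLE order_status (
    status_code  CHAR(1)      NOT NULL PRIMARY KEY,
    description  VARCHAR(100) NOT NULL
);

CREATE TABLE customer_order (
    order_id     INT      NOT NULL PRIMARY KEY,
    status_code  CHAR(1)  NOT NULL REFERENCES order_status (status_code),
    created_at   DATETIME NOT NULL
);

-- db/06_data.sql: static bootstrap/reference data loaded after the schema
INSERT INTO order_status (status_code, description) VALUES ('O', 'Open');
INSERT INTO order_status (status_code, description) VALUES ('C', 'Closed');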
"There is no such thing as "checking in a database change"."
Actually, I think you can check in a database change. The trick is to stop using simple -- unversioned -- schema and table names.
If you have a version number attached to the schema as a whole (or to a table), then you can easily have a version check-in.
Note that database versions don't need a fancy major-minor-release scheme. The "major" revision in application software usually reflects a basic level of compatibility, and that basic level of compatibility should be defined as "uses the same data model".
So app versions 2.23 and 2.24 both use version 2 of the database schema.
The version check-in has two parts.
The new table. For example, MyTable_8 is version 8 of a given table.
The migration script. For example MyTable_8 includes a MyTable_7 to MyTable_8 script which moves the data, providing defaults or whatever is required.
There are several ways this is used.
Compatible upgrades. When merely altering a table to add a column that permits nulls, the version number stays the same.
Incompatible upgrades. When adding non-null columns (that need initial values) or changing the fundamental shape of tables or data types of columns, you're making a big change and you have a migration script.
Note that the old data stays in place until explicitly dropped at the end of the change procedure. You have to run tests to assure that everything worked.
You might have a two-part drop -- first rename, then (a week later) finally drop.
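As a sketch of what such a version check-in might look like in plain SQL (the table and column names here are invented), the new table and its migration script sit side by side in source control:
-- MyTable_8: version 8 of the table, e.g. adding a new non-null column
CREATE TABLE MyTable_8 (
    id          INT          NOT NULL PRIMARY KEY,
    name        VARCHAR(200) NOT NULL,
    region_code CHAR(2)      NOT NULL   -- new in version 8
);

-- Migration script MyTable_7 -> MyTable_8: move the data,
-- providing a default for the new column
INSERT INTO MyTable_8 (id, name, region_code)
SELECT id, name, 'XX'   -- 'XX' is a placeholder default for existing rows
FROM   MyTable_7;

-- MyTable_7 stays in place until tests pass; it is renamed first
-- and only dropped later.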
Make sure that your O/R-Mapping tool is able to build the necessary tables out of the default configuration it has and also add missing columns. This should cover 90% of your cases.
The other 10% are
coping with missing values for columns that were added after the data was inserted
writing data-migration scripts for the rare cases where you need to make more fundamental changes between versions
See the DBDeploy open source project. http://dbdeploy.com/
It allows you to check in database change scripts. It will then produce a consolidated change script including all changes that have not been applied.
The site describes the process pretty well.
This project is based on the techniques in Martin Fowler's article on evolutionary database design. I was on the project that Martin based the article on. DbDeploy is a pretty good implementation of the process we used.
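The general idea is that each change is a small, numbered SQL script checked into source control, and the tool records which scripts have already been applied so it can build the consolidated change script. A sketch of what a checked-in delta might look like (the file name and the undo marker shown here are illustrative; check the DBDeploy documentation for its exact conventions):
-- 003_add_customer_email.sql: one small, numbered change script per change
ALTER TABLE customer ADD email VARCHAR(255) NULL;

--//@UNDO   -- optional rollback section (marker syntax per the tool's docs)
ALTER TABLE customer DROP COLUMN email;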
The migrations facility of Ruby on Rails was developed to handle exactly this need. If you're not using Rails for your application, you might see if this same concept has been ported to the framework of your choice, or read up on it and determine whether you could write some quick scripts that implement the same sort of functionality.