Databases in offline software? (C)

I'm primarily a web developer, currently learning C and planning on going into C++ in a year or so when I feel absolutely confident with C (Note: I'm not saying I'll be a master at C, just that I'll understand it in a fair amount of depth and will retain it properly rather than forgetting it when I see a new language).
My question is: how are offline/networked applications written with database functionality? I've built many a database-driven website in PHP and MySQL, and I'd like to know how to use databases in my C projects; a lot of the applications I want to write rely more on content management than on heavy data processing. What database formats are available to me? What should I be looking at to build, for example, a simple contact database?
Thanks in advance.

I'd suggest SQLite for a file-based database. MongoDB is pretty awesome too if you run it locally, but it is still networked.

For a small application SQLite might be a good option for you: it is part of your application and not dependent on other software, but as a database it is fairly limited (no stored procedures, for example).
If you are looking for something more substantial (especially when multiple users are involved) you should be looking at MySQL or SQL Server. These can be accessed directly through their respective APIs or via some kind of common mediator such as ODBC (a sketch follows below).
Your question is really very open; much application software depends on relational database technology at some level, but the OS and the task at hand usually dictate the best choices.
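As a concrete illustration of the ODBC route, here is a minimal sketch in C, assuming unixODBC, a DSN named "contactsdb" configured on the machine, and a contacts table; all of those names are assumptions for the example.

    /* Minimal ODBC sketch: connect through a DSN and print one column.
       The DSN "contactsdb", the credentials, and the contacts table are
       assumptions for illustration. Link with -lodbc (unixODBC). */
    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    int main(void)
    {
        SQLHENV env;
        SQLHDBC dbc;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        SQLRETURN rc = SQLConnect(dbc, (SQLCHAR *)"contactsdb", SQL_NTS,
                                  (SQLCHAR *)"user", SQL_NTS,
                                  (SQLCHAR *)"pass", SQL_NTS);
        if (SQL_SUCCEEDED(rc)) {
            SQLHSTMT stmt;
            SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
            SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM contacts", SQL_NTS);

            SQLCHAR name[128];
            SQLLEN ind;
            while (SQL_SUCCEEDED(SQLFetch(stmt))) {
                SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof name, &ind);
                printf("%s\n", name);
            }
            SQLFreeHandle(SQL_HANDLE_STMT, stmt);
            SQLDisconnect(dbc);
        }

        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }

The same code would talk to MySQL, SQL Server, or another backend by pointing the DSN at a different driver, which is the point of using a common mediator.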

Going the SQL route with offline applications in C is not straightforward. While database storage brings advantages (reliability, for example), simply by using SQL it adds conversion steps every time you save or load your data.
The question is: why would you want to build SQL commands as character strings to load/save data that your program already treats as binary, and that you could store as binary directly in local storage? It costs!
On the other hand, if you already know SQL well, then to get started you only have to learn one of the several APIs for accessing a database (SQLite, MySQL, ...) from C.
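To make the contrast concrete, here is a minimal sketch of the "store the binary directly" approach described above, using a hypothetical contact record and plain fwrite/fread; the struct layout is an assumption for illustration, and note that raw struct dumps are not portable across compilers or architectures.

    /* Save/load a record as raw bytes in a local file, no SQL involved.
       The contact layout is a hypothetical example. */
    #include <stdio.h>

    struct contact {
        char name[64];
        char email[64];
    };

    int save_contact(const char *path, const struct contact *c)
    {
        FILE *f = fopen(path, "wb");
        if (!f)
            return -1;
        size_t n = fwrite(c, sizeof *c, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    int load_contact(const char *path, struct contact *c)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        size_t n = fread(c, sizeof *c, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }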

Related

Database access in C

I am developing an application written completely in C. I have to save data permanently somewhere. I tried file storage, but it feels like a really primitive way to do the job, and I don't want to save my sensitive data in a simple text file. How can I save my data and access it back in an easy manner? I come from a JavaScript background and would prefer something like JSON. I would also be happy with something like PostgreSQL. Give me some suggestions. I am using gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3.
SQLite seems to meet your requirements. From the documentation:
SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained in a single disk file. The database file format is cross-platform: you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an Application File Format. Think of SQLite not as a replacement for Oracle but as a replacement for fopen().
Check out the quickstart
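Along the lines of that quickstart, here is a minimal sketch of the SQLite C API, assuming a hypothetical notes table; build with -lsqlite3.

    /* Open a database file, run a couple of statements, and read rows
       back. The notes table and its columns are illustrative assumptions. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("app.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        char *err = NULL;
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);"
            "INSERT INTO notes(body) VALUES('hello');",
            NULL, NULL, &err);
        if (err) {
            fprintf(stderr, "exec failed: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT id, body FROM notes;", -1, &stmt, NULL) == SQLITE_OK) {
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("%d: %s\n", sqlite3_column_int(stmt, 0),
                       (const char *)sqlite3_column_text(stmt, 1));
            sqlite3_finalize(stmt);
        }

        sqlite3_close(db);
        return 0;
    }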
http://www.postgresql.org/docs/8.1/static/libpq.html
libpq is the C application programmer's interface to PostgreSQL. libpq is a set of library functions that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries.
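For comparison, a minimal libpq sketch; the connection string and the contacts table are assumptions for illustration, and it links with -lpq.

    /* Connect to a local PostgreSQL server, run one query, print one column. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb");   /* connection string is an assumption */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        PGresult *res = PQexec(conn, "SELECT name FROM contacts;");
        if (PQresultStatus(res) == PGRES_TUPLES_OK) {
            for (int i = 0; i < PQntuples(res); i++)
                printf("%s\n", PQgetvalue(res, i, 0));
        }
        PQclear(res);
        PQfinish(conn);
        return 0;
    }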
I would recommend SQLite. I think it is a great way of storing local data.
There are C library bindings, and its API is quite simple.
Its main advantage is that all you need is the library. You don't need a complex database server setup (as you would with PostgreSQL). Its footprint is also quite small; it's used a lot in the mobile development world (iOS, Android, and others).
Its drawback is that it doesn't handle concurrency that well. But if it is a local, simple, single-threaded application, then I guess it won't be a problem.
MySQL embedded or BerkeleyDB are other options you might want to take a look at.
SQLite is a lightweight database. This page describes the C language interface:
http://www.sqlite.org/capi3ref.html
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code for SQLite is in the public domain.
SQLite is a popular choice because it's lightweight and speedy. It also offers a C/C++ interface (along with bindings for a bunch of other languages).
Everyone else has already mentioned SQLite, so I'll counter with dbm:
http://linux.die.net/man/3/dbm_open
It's not quite as fancy as SQLite (e.g., it's not a full SQL database), but it's often easier to work with from C, as it requires less setup.
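A minimal sketch of the dbm interface, storing and fetching one key/value pair; the database name and key are illustrative, and depending on the platform you may link with -lndbm or -lgdbm_compat.

    /* Store one key/value pair in an ndbm database and read it back. */
    #include <fcntl.h>
    #include <ndbm.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DBM *db = dbm_open("contacts", O_RDWR | O_CREAT, 0644);
        if (!db) {
            perror("dbm_open");
            return 1;
        }

        datum key   = { "alice", strlen("alice") };
        datum value = { "alice@example.com", strlen("alice@example.com") };
        dbm_store(db, key, value, DBM_REPLACE);

        datum found = dbm_fetch(db, key);
        if (found.dptr)
            printf("%.*s\n", (int)found.dsize, (const char *)found.dptr);

        dbm_close(db);
        return 0;
    }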

Create database software with cocoa

I'm looking to create software for Mac which is the front-end for a locally-created database.
What's the best way for me to do this in Cocoa?
Some more information:
The database will start simple, but eventually be fairly complex (I have 200-300 tables in my projected schema).
I want to save the database as a file (or bundle), so that from the user's perspective it's just a document.
I'm really just starting out with Cocoa, so I'd like to use a method with a decent learning curve (probably something built into Cocoa)
Potentially this would have to be distributed through the mac app store (sigh)
Thanks for any advice.
Core Data is an option, but it's not without trade-offs. It's more or less a single-user solution. You can write multi-user apps with it, but unlike (say) Oracle or PostgreSQL, which are client-server from the start, you'd have to write your own server app to marshal the client requests. It also (intentionally, by design) makes it difficult to gain direct SQL access to the underlying data store.
On the other hand, the learning curve is easy, and it's part of Cocoa and well-integrated into the standard document-based architecture.

Simplest way to develop an app that can use multiple types of databases?

I have a project for a class which requires that if a database is used, options exist for the user to pick a database to use which could be of a different type. So while I can use e.g. MySQL for development, in the final version of the project, the user must be able to choose a database (Oracle, MySQL, SQLite, etc.) upon installation. What's the easiest way to go about this, if there is an easy way?
The language used is up to me as long as it's supported by the department's Linux machines, so it could be Java, PHP, Perl, etc. I've been researching and found info on ODBC, JDBC, and SQLJ (such as this) but I'm quite new to databases so I'm having a hard time figuring out what would be best for my needs. It's also possible there may not be a simple enough way to do this; the professor admitted he's not a database guy either and he seemed to think it would be easy without having a clear idea of what it would take.
This is for a web app, but it ought to be fairly straightforward, using for example HTML and JavaScript on the client side and Java with a MySQL database on the server side. No mention has been made of frameworks, so I assume they would be overkill. I have the option of using Tomcat or Apache if necessary, but the overall idea is to keep things simple, and everything used should be installable/changeable/configurable with just user-level access. So something like having to recompile PHP to use ODBC would be out, I think.
Within these limitations, what would be the best way (if any) to be able to interact with an arbitrary database?
The issue I think you will have here is that SQL is not truly standard. What I mean is that vendors (Oracle, MySQL, etc.) have included types and features that are not in the SQL standard in order to "tie you in" to their DB, such as Oracle's VARCHAR2 and so on.
When I was at university, my final year project was to create an application that allowed users to create relational databases using JDBC with a Java front-end.
The use of JDBC was very simple, but the issue was finding enough SQL features/types that all the vendors have in common, so that users could switch between them without any issues. A way around this is to implement modules that deal with vendor-specific issues and translate between them. For example, you may develop a database for MySQL with lots of MySQL-specific code in it, but when you later want to use Oracle there are issues you would need to resolve.
I would spend some time looking at the core SQL standard that all the vendors implement and then code against those features. I think the issue won't be the technology you use but rather the SQL you write.
Hope this helps; apologies if it's not helpful!
Well, you can go two ways (in Java):
You can develop your own classes to work with different databases and load their drivers in JDBC. This way you will create a data access layer for yourself, which takes some time.
You can use Hibernate (or other ORMs). This way Hibernate will take care of things for you and you only have to know how to use Hibernate. Learning Hibernate may take some time, but when you get used to it, it can be very useful for your future projects.
If you want to stick with Java, there's Hibernate (which doesn't require a framework). Hibernate is fairly easy to use. You write HQL, which gets translated to the SQL needed for the database you're using.
Maybe use an object relational mapper (ORM) or database abstraction layer (DAL). They are designed to provide a standard API to multiple database backends, making it possible to switch between different backends with minimal or no changes to your code. In Python, for example, a popular ORM is SQLAlchemy, and an excellent DAL is the web2py DAL (it's part of the web2py framework but can be used as a standalone DAL outside the framework as well). There are many other options in other languages as well.
Use a framework with a database abstraction layer and ORM; try Symfony or Rails.
There are a lot of object-relational mapping frameworks, unless you prefer plain JDBC. For simple/small applications this should work fine.

Database efficiency

I am about to write a program to keep track of my school assignments and I was wondering what database language would be the most efficient and simple to implement to track the meta-data of the assignments? I am thinking about XML, but it would require several documents.
I (currently) have at least ten assignments per week for 45 weeks. The data that has to be stored includes name, issue date, due date, path, and various states of completion. Whatever it's built on would have to handle a large increase in both the number of assignments and the amount of metadata without requiring large changes to either the format or the retrieval system.
Quite frankly, if you pick a full-fledged database you run the risk of spending more time on data entry than you do on your homework. If you really need to keep track of this, I would seriously recommend a spreadsheet.
First, I think you are confusing a relational database system with a database language. In all likelihood, you will be using a database that speaks SQL. From there, you will need another programming platform to build an application with. If you wanted, you could use a Microsoft Access database, which lets you build a simple front-end stored in the same file as the database; in this case you would be programming in VBA.
Pretty much any database system would be suitable for your needs; even Access can handle orders of magnitude more work than you are describing.
Some possible database systems are, again, Microsoft Access, Microsoft SQL Server Express, VistaDB, and SQLite (probably the best choice after Access for your needs); of course there are many others.
You could build either a web front end or a desktop app; I assume you are using Windows. You could use Visual Studio C# Express for this if you wanted, or you could go with VB.NET, VB6, or what have you.
My answer isn't directly related, but as you design your database structures you might want to take a look at some of the objects in the SIF specification, in particular the Assignment and GradingAssignment objects.
As for how to store the data, you could use an RDBMS (SQLite, MySQL) or perhaps a key-value database (ZODB, for example).
Of course, if this is just a small personal project you could simply serialize the data to something like XML, JSON, or CSV and store it as a file. It might be better in the long run to use a database, though; a database will probably scale much more easily.
I would recommend Oracle Express Edition (with Application Express). It will scale up to 4 GB of user data; beyond that, you would have to start paying. Application Express makes it very simple to build CRUD applications, which is what it sounds like yours is.
For a project like that I would use SQLite or MySQL; either would be fast enough, and both are easy to set up.
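As a sketch of what the SQLite route might look like for the metadata listed in the question (name, issue date, due date, path, completion state), here is one possible table created through the C API; the column names are an assumption. Build with -lsqlite3.

    /* Create an assignments table matching the metadata described above. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("assignments.db", &db) != SQLITE_OK)
            return 1;

        char *err = NULL;
        int rc = sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS assignment("
            "  id         INTEGER PRIMARY KEY,"
            "  name       TEXT NOT NULL,"
            "  issue_date TEXT,"            /* ISO-8601 date strings */
            "  due_date   TEXT,"
            "  path       TEXT,"
            "  state      TEXT)",           /* e.g. 'not started', 'in progress', 'done' */
            NULL, NULL, &err);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "%s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }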

File database suggestion with support for multiple concurrent users

I need a database that could be stored on a network drive and would allow multiple users (up to 20) to use it without any server software.
I'm considering MS Access or Berkeley DB.
Can you share your experience with file databases?
Which one did you use, did you have any problems with it?
I really don't think that file-based databases can scale past half a dozen users. The last time I had an Access database (admittedly this was quite a while ago) I had to work really hard to get it to work for 8-9 people.
It is really much easier to install Ubuntu on an old junk computer with PostgreSQL or MySQL. That's what I had to do even when I kept my Access front-end.
I would suggest SQLite because the entire database is stored in a single file, and it quite safely handles multiple users accessing it at the same time. There are several different libraries that you can use for your client application and there is no server software needed.
One of its strengths is that it mimics SQL servers so closely that if you ever need to move from a database file to a full-fledged SQL server, most of the queries in your client won't need to change; you'll just need to migrate the data over to the new server database (and I wouldn't be surprised if there are programs to convert SQLite databases to MySQL databases, for example).
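If several local clients do open the same SQLite file, a busy timeout keeps writers from failing immediately when the database is momentarily locked; a minimal sketch follows, with the timeout value chosen arbitrarily. Note this does not make SQLite safe over a network filesystem, as the FAQ quoted below points out.

    /* Open a database and retry for up to 5 seconds instead of returning
       SQLITE_BUSY immediately when another client holds a lock. */
    #include <sqlite3.h>

    int open_shared(const char *path, sqlite3 **db)
    {
        if (sqlite3_open(path, db) != SQLITE_OK)
            return -1;
        sqlite3_busy_timeout(*db, 5000);
        return 0;
    }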
Beware of any file-based database; they are all likely to have the same problems. Your situation really calls for a client/server solution.
From the SQLite FAQ:
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
http://www.sqlite.org/whentouse.html
Access can be a bitch. I've been in the position where I had to go around and tell 20-50 people to close Access so I could go into design mode to change the design of the forms and maybe a column. No fun at all. (This was an old version of Access, and it might just have been a bad setup.)
Ayende was recently trying to make a similar decision, and tried a bunch of so-called embedded databases. Hopefully his observations can help you.
I have been using Access for some time and in a variety of situations, including on-line. I have found that Access works well if it is properly set up according to the guidelines. One advantage of Access is that it includes everything in one package: Forms, Query Building, Reports, Database Management, and VBA. In addition, it works well with all other Office applications. The Access 2007 runtime can be obtained free from here, which makes distribution less expensive. Access is certainly unsuitable for large operations, but it should be quite suitable for twenty users. EDIT: Microsoft puts the number of concurrent users at 255.
Can Access be set up to support 10-20 users? Yes. However, it, like all file-based databases, uses the file system for locking and concurrency control, and Access data files are more susceptible to corruption than database servers are. And while you can set it up for this, you MUST, as David Fenton mentions in another answer, follow best practices if you want to end up with a reliable system.
Personally, I find that, given the hoops you need to jump through to ensure an Access solution is reasonably trouble-free, it is much less trouble to implement an instance of MSDE/SQL Server Express or PostgreSQL.
Berkeley DB supports a high degree of concurrency (far more than 20), but it does so primarily by using shared memory and mutexes (possibly even replication), facilities that do not work well when BDB is deployed as a file stored on a network drive.
In order to take advantage of BDB's concurrency capabilities you would have to build an application around it.
The original question makes no sense to me, in that the options don't belong together. Berkeley DB is a database engine only, while Access is an application development tool that ships with a default file-based (i.e., non-server) database engine (Jet). Since Access is being set against Berkeley DB, it seems that only a database engine is wanted and no application at all; but how end users would use Berkeley DB without a front end, I don't know (I've only used it from the command line).
Those who cannot run a Jet MDB with 20 simultaneous users are simply not competent to be giving advice on using Jet as a data store. It is completely doable as long as best practices are followed. In addition to Microsoft's Best Practices web page, I would recommend Tony Toews's Best Practices and his Corruption FAQ (i.e., things you want to avoid doing in order to have a stable application).
I strongly doubt that the original poster is building no front-end application at all, but since he doesn't indicate what kind of front end is involved, it's hard to recommend a back end to go with it. Access has the advantage of giving you both parts of the equation, and when used properly it is perfectly reliable for multiple users.

Resources