The main scope of this question is to identify how a non-web-based application (i.e. a standalone executable) stores information.
Surely, web-based applications use databases.
Does a non-web-based app store information in a compiled-in database module?
For example, if we have Postgres, would we compile the Postgres source and use it in some way with a driver to store info locally?
If not, how is information stored? Are databases only for web-based apps? Why would someone compile/build/make the source of a DB?
TL;DR: An example situation:
We have a non-web-based game. Where exactly do you store character stats, progress, encounters, etc.? Do you use a database for this? If not, how?
There are lots of options:
Static files (JSON, YAML, XML, .ini, custom text formats, ...) read into memory, modified, and written out periodically and/or on exit. Care is taken to write the new file first, then rename it to overwrite the old one (see the sketch after this list).
Embedded SQL databases like SQLite, Firebird, Microsoft JET, HSQLDB, Derby, etc
Embedded key/value stores like Berkeley DB
Standalone SQL databases, bundled with the app installer, like PostgreSQL, MySQL, MS-SQL, etc. Or the same DBs installed separately by the user, where the app is then configured to use an existing DB.
System configuration databases like the Windows Registry. Not suitable for storing data that changes a lot, is updated frequently, or for storing a lot of data. Don't do this please.
Platform and language specific facilities like Java or Swift object serialization. Best avoided, but they have their place.
Various wacky custom formats and schemes
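As a rough sketch of the write-then-rename approach mentioned for static files above: the key=value save format and the file name here are made-up examples, not a prescribed layout.

    #include <stdio.h>

    /* Write the new save data to a temporary file, then rename it over the
     * old one so a crash never leaves a half-written save behind. */
    static int save_settings(const char *path, int level, int score)
    {
        char tmp[1024];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);

        FILE *f = fopen(tmp, "w");
        if (!f)
            return -1;

        fprintf(f, "level=%d\n", level);
        fprintf(f, "score=%d\n", score);

        if (fclose(f) != 0) {          /* flush/write failures show up here */
            remove(tmp);
            return -1;
        }

        /* On POSIX this replaces the old file atomically; on Windows rename()
         * fails if the destination exists, so you'd remove() it first or use
         * MoveFileEx. */
        if (rename(tmp, path) != 0) {
            remove(tmp);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return save_settings("savegame.txt", 3, 1250) == 0 ? 0 : 1;
    }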
It's completely unrelated to whether you compile sources, etc. Most embedded databases are available as a shared library (DLL, dylib, etc) with headers. You might link them to your program at compile time, but only in the same way you link the database drivers of some other DBs (or driver frameworks like ODBC) into your app.
No matter what you actually use, in most cases data gets stored in the desktop user's profile, or in a mobile app's data store sandbox. The main exception is DB servers, which manage their own storage locations.
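For illustration, a minimal sketch of picking a per-user data directory: the environment variables are the usual ones, but the fallback to "." is just a placeholder for this example.

    #include <stdio.h>
    #include <stdlib.h>

    /* Return a per-user directory suitable for application data.
     * Uses %APPDATA% on Windows and $HOME elsewhere; on macOS a real app
     * would prefer ~/Library/Application Support, but $HOME works as a sketch. */
    static const char *user_data_dir(void)
    {
        const char *dir = getenv("APPDATA");   /* Windows */
        if (!dir || !*dir)
            dir = getenv("HOME");              /* Linux/macOS */
        if (!dir || !*dir)
            dir = ".";                         /* placeholder last resort */
        return dir;
    }

    int main(void)
    {
        printf("Storing data under: %s\n", user_data_dir());
        return 0;
    }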
Related
I'm building an online examination system and I want to use an .accdb (Access) file for the database. My question is: will .accdb allow multiple students to read the test from the database?
It's not clear what you mean by "online". Do you mean you are building a web-based application of some sort?
Access does allow multiple users to work against the database at the same time. This typically means that you split the database into a front end (the application part) and a back end: an .accdb with just the tables.
And you can use the Access database engine to drive data for a web site. However, you don't, and can't, use VBA or Access forms for this purpose.
Since you are ONLY using the tables and nothing else related to Access when you build such an online system, in most cases there is little if any reason to use the Access data engine: the business code and UI will be built with different tools than Access, and every web host these days offers MySQL, SQL Server, or some other database.
These other database systems are far more appropriate and work better for multi-user operation.
So, keep in mind when you use the term "Access", you are talking about two parts:
The database engine (ACE, previously called JET). This engine, like most database engines, does not provide any UI, reporting, or anything else; it is just a system to hold the tables and data. You then write the web-based "online" software with ASP.NET or whatever technology stack you are familiar with.
The second part of Access is the so-called IDE (integrated development environment). That part of Access lets you write code, build forms, and build reports. However, it is not web based, so the term "online" does not really apply here; the Access IDE is strictly desktop only.
In fact, if you need a multi-user application, we will often use MS Access to build the Windows desktop software, but still choose a server-based database system like SQL Server or MySQL for the data.
So Access alone does not really give you an "online" system; that would require adopting a set of web-based development tools.
I was thinking - let's take a look at a computer game of any kind, or any program in general.
(Chrome, Skype, Warcraft,...)
They need to save some things that a user wanted them to save.
How do they do it?
Do they save it in a simple text file, or do they bundle a database system (like MySQL, ...) with themselves?
That really depends on your needs. If you only need to store some key-value pairs, an application can use a simple text file (e.g. an *.ini file). That, however, is a plain-text file readable by everybody.
An application can of course also use a database like MySQL or MS SQL. However, these are not very handy if you want to distribute your application, as they run as a separate service on a server and need to be installed separately. Then there are databases like SQLite, which is also a SQL database but stores everything inside a single file. Your application just needs a way to interact with this file.
Yet another way would be to serialize/deserialize an object which holds the data you want to store.
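A minimal sketch of that serialize/deserialize idea in C, assuming an invented player_state struct and file name; a real format would add a version field and an endian-safe encoding rather than dumping raw memory.

    #include <stdio.h>

    /* A toy record to persist. */
    struct player_state {
        int  level;
        int  health;
        long score;
    };

    static int save_state(const char *path, const struct player_state *s)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return -1;
        size_t n = fwrite(s, sizeof *s, 1, f);   /* raw dump: not portable across architectures */
        return (fclose(f) == 0 && n == 1) ? 0 : -1;
    }

    static int load_state(const char *path, struct player_state *s)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        size_t n = fread(s, sizeof *s, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    int main(void)
    {
        struct player_state out = { 5, 80, 12345 }, in;
        if (save_state("player.bin", &out) == 0 && load_state("player.bin", &in) == 0)
            printf("level=%d health=%d score=%ld\n", in.level, in.health, in.score);
        return 0;
    }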
There are other ways to store data, like NoSQL databases. I personally haven't used one of those yet, but here is a listing of some of them: http://nosql-database.org/
XML could also be used.
There are endless ways an application can store its data.
There is literally no end to the ways programs will store data. That said, some common approaches:
Home-made archive formats: every game company seems to have a few of their own (Blizzard's MoPaQ, ...)
XML files: usually used for simple configuration (Apple's plist files, Windows application configurations, Skype's user preferences, ...)
SQLite databases: usually used for larger amounts of personal data (Firefox: bookmarks, history, etc.; iOS personal information databases, etc.)
"In the cloud" in someone else's database (basically all web apps)
Plain text or simple text formats (Windows .ini/.inf, Java MANIFEST.MF, YAML, etc.; a small reader sketch follows below)
...
A single program might use multiple methods depending on what it's storing. There is no unified solution, and there is no one solution that is right for every task, since every system has tradeoffs (human-readability vs. packing efficiency, random access vs. sequential archive, etc.).
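As a sketch of the plain-text formats in the list above, here is a tiny key=value reader in C. The settings.ini name and the handling are assumptions; a real .ini parser would also deal with [sections], quoting, and whitespace.

    #include <stdio.h>
    #include <string.h>

    /* Read "key=value" lines from a simple settings file and print them. */
    int main(void)
    {
        FILE *f = fopen("settings.ini", "r");   /* hypothetical file name */
        if (!f) {
            perror("settings.ini");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = '\0';          /* strip the newline */
            char *eq = strchr(line, '=');
            if (!eq || line[0] == ';' || line[0] == '#')
                continue;                                /* skip comments / malformed lines */
            *eq = '\0';
            printf("key '%s' -> value '%s'\n", line, eq + 1);
        }
        fclose(f);
        return 0;
    }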
A lot of programs use SQLite to store data (http://www.sqlite.org). SQLite is a very compact cross-platform SQL database. Many programs do use text files.
Problem
I have a custom data source (an in-house system) which I would like to access as a standard data source. I am looking for a solution that provides standard SQL-like accessors so that the data source can be used from different report engines, Excel, MS Access, maybe standard web frontends, and off-the-shelf data management tools. In other words, I would like off-the-shelf support for ODBC, JDBC and whatnot, without having to implement support for all these drivers myself.
What I have been doing so far
I have successfully used the SQLite virtual table mechanism to provide access to the data source using a standard SQLite driver. SQLite will take care of the SQL query parsing, table metadata translation (provided by my extension) and manage the SQL parts that my data source does not support (aggregates, complex joins and updates, etc).
However, what I don't get with SQLite is network support. SQLite is an embedded database engine which works very well with my data source, but although it has ODBC and JDBC support, it has no wire protocol. Embedding my custom data source in the client process is not an option, since the data source has very strict runtime restrictions (among other restrictions).
What I am considering
1. Networked SQLite
The obvious solution is to look at whether it is possible to network the SQLite data source. However, the network options do not seem well supported, especially not with client drivers (i.e. not at all).
2. MySQL storage engine
I have been looking at replacing the SQLite virtual table driver with a MySQL storage engine (30 minutes of reading API specs gives me a gut feeling the APIs are quite similar). I have three concerns:
Process control. My data source is a system which wants to manage its own processes. I would prefer to be the one responsible for service provisioning.
Running the whole MySQL server looks like overkill from an IT administration point of view. An embedded networked server would suffice. I already have the network server (it's already a web service process).
Licensing. MySQL looks like either GPL or expensive. I did not find anything conclusive on what license requirements this setup would force me into.
3. Mimicking a known network protocol
I have been looking into mimicking "known" protocols such as the MySQL wire protocol or MS SQL's TDS (FreeTDS is a good source). However, readily available solutions look scarce, and I might have to roll my own if I go down this path, which is probably a lot of work.
I am looking for other options on how to do this. Right now, I am investigating if it is possible to choose #2 and use an interface between my data source and the storage engine (e.g. 0mq or some network protocol). I believe it is doable, but I am very interested in easier solutions. Has anyone out there done something similar (with success)?
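For illustration, a 0mq-based interface like the one considered above could look roughly like this minimal libzmq REP loop sitting between the storage engine and the data source. The endpoint and the echo-style handling are placeholders, not a real protocol.

    #include <stdio.h>
    #include <string.h>
    #include <zmq.h>

    /* Minimal 0MQ REP loop: the storage engine (or any client) would connect
     * a REQ socket to this endpoint and exchange request/response messages
     * with the actual data source. */
    int main(void)
    {
        void *ctx  = zmq_ctx_new();
        void *sock = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(sock, "tcp://*:5555");               /* placeholder endpoint */

        for (;;) {
            char req[256];
            int n = zmq_recv(sock, req, sizeof req - 1, 0);
            if (n < 0)
                break;
            if (n > (int)sizeof req - 1)
                n = sizeof req - 1;                   /* message was truncated */
            req[n] = '\0';

            /* A real bridge would translate req into a data-source query here. */
            char rep[300];
            snprintf(rep, sizeof rep, "handled: %s", req);
            zmq_send(sock, rep, strlen(rep), 0);
        }

        zmq_close(sock);
        zmq_ctx_destroy(ctx);
        return 0;
    }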
I've been struggling with this issue for a while. Our company servers lack any sort of database, i.e. no MySQL, MongoDB, etc in sight.
Since we can't install any for reasons beyond the scope of this question, I was wondering if there was any alternative I could use to save data from a form. (We collect prospect data through a form on our site, which then sends this data as an email that is plugged into our internal database through email2DB...)
You could use a library like SQLite.
You could also use indexed files like GDBM.
However, you should think about backup strategies. Perhaps serialization should be a concern (and using textual or portable data formats like XDR, ASN.1, JSON, YAML, ...).
But you might also try to discuss with managers installing e.g. a MySQL server on a machine. You don't need dedicated hardware for that; it can run (at least for development and test) on a machine used for other things.
A text file? :)
Or perhaps TinySQL?
You can save it as a flat file. Flat files work great when you are just saving things like logs, or output from a webform. They quickly start to fail if you have any *-to-many relationships.
Do you have access to PHP?
I am developing an application completely written in C. I have to save data permanently somewhere. I tried file storage, but I feel it's a really primitive way to do the job, and I don't want to save my sensitive data in a simple text file. How can I save my data and access it back in an easy manner? I come from a JavaScript background and would prefer something like JSON. I would be happy with something like PostgreSQL as well. Give me some suggestions. I am using gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3.
SQLite seems to meet your requirements.
SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views, is contained in a single disk file. The database file format is cross-platform - you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an Application File Format. Think of SQLite not as a replacement for Oracle but as a replacement for fopen().
Check out the quickstart
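A minimal sketch of the C API in action; the app.db file name and the settings table are just examples (link with -lsqlite3).

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print each row returned by sqlite3_exec(). */
    static int print_row(void *unused, int ncols, char **vals, char **names)
    {
        (void)unused;
        for (int i = 0; i < ncols; i++)
            printf("%s=%s ", names[i], vals[i] ? vals[i] : "NULL");
        printf("\n");
        return 0;
    }

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open("app.db", &db) != SQLITE_OK) {   /* file name is an example */
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        const char *sql =
            "CREATE TABLE IF NOT EXISTS settings(key TEXT PRIMARY KEY, value TEXT);"
            "INSERT OR REPLACE INTO settings VALUES('volume','0.8');"
            "SELECT * FROM settings;";

        if (sqlite3_exec(db, sql, print_row, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "SQL error: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }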
http://www.postgresql.org/docs/8.1/static/libpq.html
libpq is the C application programmer's interface to PostgreSQL. libpq is a set of library functions that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries.
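A minimal libpq sketch along those lines; the connection string and query are placeholders (compile and link with -lpq).

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Connect, run one query, print the results. Adjust host/dbname/user
     * in the conninfo string for a real setup. */
    int main(void)
    {
        PGconn *conn = PQconnectdb("host=localhost dbname=test user=test");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        PGresult *res = PQexec(conn, "SELECT 1 AS answer");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        } else {
            for (int i = 0; i < PQntuples(res); i++)
                printf("%s = %s\n", PQfname(res, 0), PQgetvalue(res, i, 0));
        }

        PQclear(res);
        PQfinish(conn);
        return 0;
    }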
I would recommend SQLite. I think it is a great way of storing local data.
There are C library bindings, and its API is quite simple.
Its main advantage is that all you need is the library. You don't need a complex database server setup (as you would with PostgreSQL). Also, its footprint is quite small (it's also used a lot in the mobile development world: iOS, Android, and others).
Its drawback is that it doesn't handle concurrency that well. But if it is a local, simple, single-threaded application, then I guess it won't be a problem.
MySQL embedded or BerkeleyDB are other options you might want to take a look at.
SQLite is a lightweight database. This page describes the C language interface:
http://www.sqlite.org/capi3ref.html
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code for SQLite is in the public domain.
SQLite is a popular choice because it's light-weight and speedy. It also offers a C/C++ interface (along with bindings for a bunch of other languages).
Everyone else has already mentioned SQLite, so I'll counter with dbm:
http://linux.die.net/man/3/dbm_open
It's not quite as fancy as SQLite (e.g., it's not a full SQL database), but it's often easier to work with from C, as it requires less setup.
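A small sketch of that lower setup cost using the classic ndbm interface; the database name and key are made up, and on Linux this usually means linking gdbm's compatibility layer (e.g. -lgdbm_compat).

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <ndbm.h>

    /* Open (or create) a dbm database, store one key/value pair, read it back. */
    int main(void)
    {
        DBM *db = dbm_open("appdata", O_RDWR | O_CREAT, 0600);   /* example name */
        if (!db) {
            perror("dbm_open");
            return 1;
        }

        datum key, value;
        key.dptr   = "username";
        key.dsize  = strlen("username");
        value.dptr = "alice";
        value.dsize = strlen("alice");

        if (dbm_store(db, key, value, DBM_REPLACE) != 0)
            fprintf(stderr, "store failed\n");

        datum found = dbm_fetch(db, key);
        if (found.dptr)                                   /* NULL dptr means "not found" */
            printf("username = %.*s\n", (int)found.dsize, (char *)found.dptr);

        dbm_close(db);
        return 0;
    }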