How to store site-wide settings in a database?
I'm debating three different approaches to storing site-wide settings for a web application.
1. A key/value pair lookup table, where each key represents a setting.
   Pros: Simple to implement.
   Cons: No constraints on the individual settings.
2. A single-row settings table.
   Pros: Per-setting defaults and constraints.
   Cons: Lots of settings would mean lots of columns; I'm not sure whether Postgres would have an issue with that.
3. Just hard-code it, since the settings won't change that often.
   Pros: Easy to set up and to add more settings.
   Cons: Much harder to change.
Thoughts on which way to go?
Since your question is tagged database/SQL, I presume you'd have no problem accessing an SQL table for both lookup and management of settings. Guessing here, but I'd start with a table like:
settingName   value   can_be_null   minvalue   maxvalue   description
TheAnswer     42      no            1          100        this setting does...
...
If you think about managing a large number of settings, there's more information you need about each one of them than just their current value.
I've used a key/value pair lookup table much in the way you describe with good results.
As an added bonus, the table had a "configuration name" column, which provided a simple way to choose/activate a specific set of configuration settings. That meant that prod, dev, and test could all live in the same table, though it was up to the application to choose which set to use; in our case, a JVM argument made sense. It might make sense to store different "sets" of config settings in the same DB table; then again, it might not.
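A minimal sketch of that idea (table and column names are hypothetical, not from the original answer):

CREATE TABLE app_settings (
    config_name varchar(30) NOT NULL,   -- e.g. 'prod', 'dev', 'test'
    key         varchar(50) NOT NULL,
    value       text,
    PRIMARY KEY (config_name, key)
);

-- the application picks its configuration set at startup
SELECT key, value FROM app_settings WHERE config_name = 'prod';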
If you are thinking about file-based configuration, I like INI or YAML. You could still store it in a database, though you probably won't find an INI or YAML column type (as you might for XML).
I would go with the first option: a key/value pair lookup table. It's the most flexible and scalable solution, in my opinion. If you are worried about the cost of running many queries here and there to retrieve various config values, you could always implement some sort of cache, such as loading the whole table into memory at once. In addition to key and value, you could add columns such as "Description" and "Default Value", and build a generic configuration editor that displays the descriptions on-screen to help the user edit the config values.
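For illustration, an extended key/value table along those lines might look like this (table and column names are my own, not from the answer):

CREATE TABLE app_config (
    key           varchar(50) PRIMARY KEY,
    value         text,
    default_value text,   -- fallback when value is NULL
    description   text    -- shown on-screen by a generic configuration editor
);

-- load the whole table at once so the application can cache it in memory
SELECT key, COALESCE(value, default_value) AS value FROM app_config;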
I've seen some commercial applications with a single-row config table, and while I don't have direct experience doing development work against it, it struck me as much less scalable and harder to read.
Following Mike's idea, here is a script to create a table that stores key/value pairs. It integrates a mechanism (a constraint) to check that each value is valid with respect to min/max/not-null, and it also automatically creates a function fn_setting_XXXX() to quickly get the value of the corresponding setting (correctly cast).
CREATE TABLE settings
(
    id          serial,
    name        varchar(30),
    type        regtype,
    value       text,
    v_min       double precision,
    v_max       double precision,
    v_not_null  boolean default true,
    description text,
    constraint settings_pkey primary key (id),
    constraint setting_unique unique (name),
    constraint setting_type check (type in ('boolean'::regtype, 'integer'::regtype,
                                            'double precision'::regtype, 'text'::regtype))
);
/* constraint to check the value against min/max/not-null */
ALTER TABLE settings
    ADD CONSTRAINT check_value
    CHECK (
        case when type in ('integer'::regtype, 'double precision'::regtype) then
            case when v_max is not null and v_min is not null then
                value::double precision <= v_max and value::double precision >= v_min
            when v_max is not null then
                value::double precision <= v_max
            when v_min is not null then
                value::double precision >= v_min
            else
                true
            end
        else
            true
        end
        and
        case when v_not_null then
            value is not null
        else
            true
        end
    );
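A quick sanity check of the constraint (the example rows are mine, not from the original answer):

-- accepted: 42 lies within [1, 100]
INSERT INTO settings (name, type, value, v_min, v_max)
VALUES ('the_answer', 'integer'::regtype, '42', 1, 100);

-- rejected: 142 violates check_value, since v_max = 100
INSERT INTO settings (name, type, value, v_min, v_max)
VALUES ('too_big', 'integer'::regtype, '142', 1, 100);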
/* trigger function to (re)create the get function for quick access to a setting */
CREATE OR REPLACE FUNCTION ft_setting_create_fn_get() RETURNS TRIGGER AS
$BODY$
BEGIN
    IF TG_OP <> 'INSERT' THEN
        -- drop the getter that was generated for the old row
        EXECUTE format($$DROP FUNCTION IF EXISTS fn_setting_%1$I();$$, OLD.name);
    END IF;
    IF TG_OP = 'DELETE' THEN
        -- a BEFORE DELETE trigger must return OLD; returning NEW (which is NULL
        -- for DELETE) would silently cancel the deletion
        RETURN OLD;
    END IF;
    -- %1$I quotes the name as an identifier, so setting names should be simple
    -- lower-case identifiers; %1$L quotes it as a literal for the WHERE clause
    EXECUTE format($$
        CREATE FUNCTION fn_setting_%1$I() RETURNS %2$s AS
        $fn$ SELECT value::%2$s FROM settings WHERE name = %1$L $fn$ LANGUAGE sql;
    $$, NEW.name, NEW.type::regtype);
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER tr_setting_create_fn_get_insert
    BEFORE INSERT OR DELETE ON settings
    FOR EACH ROW
    EXECUTE PROCEDURE ft_setting_create_fn_get();
COMMENT ON TRIGGER tr_setting_create_fn_get_insert ON settings IS 'Trigger: automatically create/drop the get function for inserted or deleted settings';
CREATE TRIGGER tr_setting_create_fn_get_update
    BEFORE UPDATE OF type, name ON settings
    FOR EACH ROW
    WHEN (NEW.type <> OLD.type OR OLD.name <> NEW.name)
    EXECUTE PROCEDURE ft_setting_create_fn_get();
COMMENT ON TRIGGER tr_setting_create_fn_get_update ON settings IS 'Trigger: automatically recreate the get function for renamed or retyped settings';
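A quick usage sketch (the setting name here is made up): inserting a row creates the getter automatically:

INSERT INTO settings (name, type, value, v_min, v_max, description)
VALUES ('max_users', 'integer'::regtype, '50', 1, 1000, 'maximum concurrent users');

SELECT fn_setting_max_users();  -- returns 50, already cast to integer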
A mixed approach is best. You have to consider what is best for each setting - which largely boils down to who would change each site-wide setting.
If you have a development server and a live server, adding new application settings can be awkward if they are solely in the db. You either need to update the database before you update the code, or have all your code handle the situation where a setting is unavailable. Obviously one common site-wide setting is the database name, and that can't be stored in the database!
You can easily end up with different settings in your test and live environments. I've moved settings out of the DB and into text files before now.
I would recommend having defaults in a 'hardcoded' file, which may then be overridden by a key/value pair lookup table.
You can therefore push up new code without first needing to change the settings stored in the database.
Where there are a varying number of values, or values that are always changed at the same time, I'd store the values as JSON or some other serialised form.
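For instance, assuming a simple key/value table settings(key, value) as in option 1, a group of related values can live under one key (a hypothetical sketch, using PostgreSQL's jsonb for the query):

-- SMTP settings that always change together, stored as one serialised value
INSERT INTO settings (key, value)
VALUES ('smtp', '{"host": "mail.example.com", "port": 587, "tls": true}');

-- individual fields remain queryable by casting the text value to jsonb
SELECT value::jsonb ->> 'host' AS smtp_host FROM settings WHERE key = 'smtp';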
Go with #1. If you want constraints based on simple types, then rather than having a simple string as a value, add a date and number field as well. The individual properties will "know" what value they want. No reason to get all meta about it.
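A sketch of that idea (hypothetical table and column names, one typed column per simple type):

CREATE TABLE typed_settings (
    key          varchar(50) PRIMARY KEY,
    string_value text,
    number_value numeric,
    date_value   date
);
-- each setting populates whichever column matches its type and leaves the
-- others NULL, so ordinary type checking applies with no parsing layer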
If I had to choose, I'd go with the first option. It is easy to add/remove rows as you need, whereas the single row could end up being a nightmare and is probably a lot less scalable. As for option 3: it's possible you will regret hard-coding your settings in the future, so you definitely don't want to box yourself in.
Although you didn't list it as an option, is XML available? It is easy to set up, and gives you slightly more options, as you can nest settings within settings.
I am using a separate PHP script with just the settings:
$datatables_path = "../lib/dataTables-1.9.4/media";
$gmaps_utils_dir = "../lib/gmaps-utils";
$plupload_dir = "../lib/plupload-1.5.2/js";
$drag_drop_folder_tree_path = "../lib/dhtmlgoodies/drag-drop-folder-tree2";
$lib_dir = "../lib";
$dbs_dir = "../.no_backup/db";
$amapy_api_registration_id = "47e5efdb-d13b-4487-87fc-da7920eb6618";
$google_maps_api_key = "ABQIABBDp7qCIMXsNBbZABySLejWiBSmGz7YWLno";
So it's your third variant.
I don't actually see what you find hard about changing these values; in fact, this is the easiest way to administer these settings. This is not the kind of data you want your users (with different roles) to change via a web interface. Products like phpMyAdmin and Joomla happily use this approach.
I have used a mixed approach before, in which I put all the settings that are not likely to change into a separate PHP file, and kept the individual settings that are likely to change as key/value pairs in the database. That way I could reduce the number of entries in the database, thereby reducing my overall query time; it also helped me keep the key size small.
Related
How to Create Database for Storage Room
I wanted to create, via Excel or Oracle, a database for a storage room that is filled with all kinds of computer parts and stuff. I have never created something like that, so I wanted to know if you could help me out by giving me advice on how a beginner should create such a database. It should be possible to insert, remove, and update parts. I hope my question is readable and understandable. Thanks.
A simple option to do that - not only the table (so that you could write your own DML statements to insert, update, or delete rows) but a nice application - is to use Oracle Application Express (Apex). Depending on the database version you use, it might already be installed by default. If not, ask your DBA to install it. Alternatively, create a free account on apex.oracle.com; you'll get limited space (more than enough to do what you want to do). In Application Builder, use the Excel file you have as a "source", which will then be used by Apex's wizard to create a table in the database as well as an application, a true GUI which works and looks just fine.
If you don't have anything at all, not even an Excel file, well... that's another problem and requires some more work to be done:
- You have to know what you want (OK, a storage room).
- Is a single table enough to contain all the information you'd want to collect? If so, which columns (attributes) do you want to collect? If not (for example, you'd want to "group" items), you'd need at least two tables related to each other by means of a master-detail relationship, which also means that you'll have to create a foreign key constraint.
- Which datatypes are appropriate for certain attributes? You wouldn't store item names in a number datatype, right? Nor should you put dates (when an item entered the storage room) as strings in a varchar2 column instead of a date datatype column, etc.
Basically, YMMV.
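A minimal sketch of the two-table, master-detail design mentioned above (all names and columns are illustrative, not from the answer):

CREATE TABLE categories (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(50) NOT NULL            -- e.g. 'CPU', 'RAM', 'Cables'
);

CREATE TABLE parts (
    id          NUMBER PRIMARY KEY,
    category_id NUMBER NOT NULL REFERENCES categories (id),  -- master-detail link
    name        VARCHAR2(100) NOT NULL,
    quantity    NUMBER DEFAULT 0,
    date_stored DATE DEFAULT SYSDATE      -- when the item entered the storage room
);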
What is the conventional way to hard-code values in a database?
My application has a database table that is used to record the attendance of employees, and the column attendance_status has only three possible values ("present", "absent", "on_leave"), with NULL as the default. How do I add this to the database? So far I have come up with two possible ways:
1. Create another table attendance_status with status_id and status_value, add the above values to it, and then use the id in the application for all SQL queries. Probably the bad way.
2. Hardcode the values (maybe in a config file) and use them throughout the app's SQL queries.
Am I missing the right way? How should this be approached?
Either will work, but option 1 will give you flexibility in the event that the requirements change, and it is the standard data model. I would, however, name my columns a little differently: I would have id, value, and name. Then the references become attendance_status.id and attendance_status.value, and the third column is available for use in displays or reports or whatever; value is on_leave and name is On leave. Option 2 works provided the data input point is totally closed. If someone codes new functionality, there is the risk that he or she will invent something different to mean the same thing, like onLeave.
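A sketch of that lookup table and the foreign key from the attendance table (generic SQL; the attendance table's columns are my own invention):

CREATE TABLE attendance_status (
    id    integer PRIMARY KEY,
    value varchar(20) NOT NULL UNIQUE,  -- machine-readable token, e.g. 'on_leave'
    name  varchar(50) NOT NULL          -- display text, e.g. 'On leave'
);

INSERT INTO attendance_status (id, value, name) VALUES
    (1, 'present',  'Present'),
    (2, 'absent',   'Absent'),
    (3, 'on_leave', 'On leave');

-- attendance rows reference the lookup table; status_id stays NULL by default
CREATE TABLE attendance (
    employee_id integer NOT NULL,
    day         date    NOT NULL,
    status_id   integer REFERENCES attendance_status (id)
);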
Reduce column size and trim data in a production database, handling constraints/dependencies on the same column
I have a scenario where a Java developer has made a change to the variable used to transfer data to column col of table tbl. Now I have to change the column from varchar(15) to varchar(10). But before making this change, I have to handle the existing data and the constraints/dependencies on the same column. What would be the best sequence for doing so? I am thinking of checking the constraints first, then trimming the existing data, and then altering the table. Please suggest how to handle the constraints/dependencies and, before handling them, how to check for such dependencies.
Schema evolution (the DDL changes that happen over time to tables and columns in a database, while preserving existing data and functionality) is a well-understood topic with several solutions, some of which are RDBMS-independent while others are built into the RDBMS. A key requirement for production environments is to have both a forward-change and a backout, which can be run unattended.
Many open-source advocates use Liquibase, which also has a commercial variant. Db2 for Linux/Unix/Windows also offers a built-in stored procedure, SYSPROC.ALTOBJ, which helps to automate various schema-evolution alterations, including decreasing the size of a column. You would need to study its documentation carefully and test it fully on non-production environments until you are satisfied. Read about it here: https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0011934.html
You can grow your own script, of course, in whatever language you prefer, including SQL, but remember you should also build and test a back-out script.
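If you do script it by hand, a minimal sketch of the sequence the asker describes might look like this (tbl and col come from the question; the syntax shown is PostgreSQL-style and everything else is an assumption):

-- 1. find rows that would no longer fit in varchar(10)
SELECT col FROM tbl WHERE length(col) > 10;

-- 2. check for constraints that reference the column
SELECT * FROM information_schema.constraint_column_usage
WHERE table_name = 'tbl' AND column_name = 'col';

-- 3. trim the existing data
UPDATE tbl SET col = left(col, 10) WHERE length(col) > 10;

-- 4. shrink the column
ALTER TABLE tbl ALTER COLUMN col TYPE varchar(10);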
SQL Server: How to list changed columns with change tracking?
I use SQL Server 2012 Standard Edition, and I activated the Change Tracking function on a table. When I list changes on the table with the CHANGETABLE function, I get a SYS_CHANGE_COLUMNS property with binary data: 0x0000000045000000460000004700000048000000. How do I know which columns have changed?
Because the column is a bitmask made up of the column IDs of all the columns which were changed, it's difficult to know what it's made up of. In fact, MSDN says not to interrogate SYS_CHANGE_COLUMNS directly here: https://msdn.microsoft.com/en-us/library/bb934145.aspx ("This binary value should not be interpreted directly."). However, when you are detecting changes for notification purposes, usually the notification consumer has a good idea of which columns they are interested in. For this use case, use the CHANGE_TRACKING_IS_COLUMN_IN_MASK function:

-- Get the column ID of my column
declare @MyColumnId int
set @MyColumnId = columnproperty(object_id('MyTable'), 'MyColumn', 'ColumnId')

-- Check if it's changed
declare @MyColumnHasChanged bit
set @MyColumnHasChanged = CHANGE_TRACKING_IS_COLUMN_IN_MASK(@MyColumnId, @change_columns_bitmask)

"If CHANGE_TRACKING_IS_COLUMN_IN_MASK tells me whether a column has changed, how can I write a script that tells me which columns have changed? I have around 50 attributes for each table." I'm afraid you'll need to loop through all of the columns you may be interested in... If this is too restrictive, you may have to use another change-notification approach, like Change Data Capture (CDC) or triggers.
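A sketch of such a loop over sys.columns (this assumes @change_columns_bitmask has already been fetched from CHANGETABLE; a plain cursor is used for clarity):

-- list every column of MyTable that appears in the change mask
declare @ColumnId int, @ColumnName sysname
declare col_cursor cursor for
    select column_id, name from sys.columns
    where object_id = object_id('MyTable')
open col_cursor
fetch next from col_cursor into @ColumnId, @ColumnName
while @@FETCH_STATUS = 0
begin
    if CHANGE_TRACKING_IS_COLUMN_IN_MASK(@ColumnId, @change_columns_bitmask) = 1
        print @ColumnName  -- this column changed
    fetch next from col_cursor into @ColumnId, @ColumnName
end
close col_cursor
deallocate col_cursor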
Can't change data type on MS Access 2007
I have a huge database (800MB) which contains a field called 'Date Last Modified'. At the moment this field has a Text data type, but I need to change it to a Date/Time field to carry out some queries. I have another, identical database with only 35MB of data inside it, and when I change the data type there it works fine, but when I try to change the data type on the big database it gives me an error: "Microsoft Office Access can't change the data type. There isn't enough disk space or memory". After doing some research, some sites mentioned changing the registry value MaxLocksPerFile; I tried that as well, but no luck :-( Can anyone help please?
As John W. Vinson says here, the problem you're running into is that Access wants to hold a copy of the table while it makes the changes, and that causes it to exceed the maximum allowable size of an Access file. Compacting and repairing might help get the file under the size limit, but it didn't work for me. If, like me, you have a lot of complex relationships and reports on the old table that you don't want to have to redo, try this variation on #user292452's solution instead:
1. Copy the table (i.e. 'YourTable'), then paste Structure Only back into your database with a different name (i.e. 'YourTable_new').
2. Copy YourTable again, and paste-append the data to YourTable_new. (To paste-append, first paste, and select Append Data to Existing Table.)
3. You may want to make a copy of your Access database at this point, just in case something goes wrong with the next part.
4. Delete all data in YourTable using a delete query: select all fields using the asterisk, and then run with default settings.
5. Now you can change the fields in YourTable as needed and save again.
6. Paste-append the data from YourTable_new to YourTable, and check that there were no errors from type conversion, length, etc.
7. Delete YourTable_new.
One relatively tedious (but straightforward) solution would be to break the big database up into smaller databases, do the conversion on the smaller databases, and then recombine them. This has an added benefit: if, by some chance, the text is an invalid date in one chunk, it will be easier to find (because of the smaller chunk sizes). Assuming you have some kind of integer key on the table that ranges from 1 to (say) 10000000, you can just do queries like:

SELECT * INTO newTable1 FROM yourtable WHERE yourkey >= 0 AND yourkey < 1000000
SELECT * INTO newTable2 FROM yourtable WHERE yourkey >= 1000000 AND yourkey < 2000000

etc. Make sure to enter and run these queries separately, since it seems that Access will give you a syntax error if you try to run more than one at a time. If your keys are something else, you can do the same kind of thing, but you'll have to be a bit more tricky about your WHERE clauses.
Of course, a final thing to consider, if you can swing it, is to migrate to a different database that has a little more power. I'm guessing you have reasons that this isn't easy, but with the amount of data you're talking about, you'll probably be running into other problems as well as you continue to use Access.
EDIT: Since you are still having some trouble, here is some more detail in the hope that you'll see something that I didn't describe well enough before. Here, you can see that I've created a table "OutputIDrive" similar to what you're describing. I have an ID tag, though I only have three entries. Here, I've created a query, gone into SQL mode, and entered the appropriate SQL statement. In my case, because my query only grabs values >= 0 and < 2, we'll just get one row: the one with ID = 1. When I click the run button, I get a popup that tells/warns me what's going to happen: it's going to put a row into a new table. That's good; that's what we're looking for. I click "OK". Now our new table has been created, and when I click on it, we can see that our one row of data with ID = 1 has been copied over to this new table. Now you should be able to just modify the table name and the number values in your SQL query, and run it again. Hopefully this will help you with whatever tripped you up.
EDIT 2: Aha! This is the trick. You have to enter and run the SQL statements one at a time in Access. If you try to put multiple statements in and run them, you'll get that error. So run the first one, then erase it and run the second one, etc., and you should be fine. I think that will do it! I've edited the above to make it clearer.
Adapted from Karl Donaubauer's answer on an MSDN post:
1. Switch to the Immediate window (Ctrl + G).
2. Execute the following statement:
DBEngine.SetOption dbMaxLocksPerFile, 200000
Microsoft has a KnowledgeBase article that addresses this problem directly and describes the cause: the page locks required for the transaction exceed the MaxLocksPerFile value, which defaults to 9500 locks. The MaxLocksPerFile setting is stored in the Windows registry. The KnowledgeBase article says it applies to Access 2002 and 2003, but it worked for me when changing a field in an .mdb from Access 2013.
It's entirely possible that in a database of that size, you've got text data that won't convert to a valid Date/Time. I would suggest (and you may hate me for this) that you export all those prospective date values from "Big" and go through them (perhaps in Excel) to see which ones are not formatted the way you'd expect.
Assuming that the error message is accurate, you're running up against a disk or memory limitation. Assuming that you have more than a couple of gigabytes free on your disk drive, my best guess is that rebuilding the table would put the database (including work space) over the 2 gigabyte per file limit in Access. If that's the case you'll need to either:
- Unload the data into some convenient format and load it back into an empty database with an already existing table definition, or
- Move a subset of the data into a smaller table, change the data type in the smaller table, compact and repair the database, and repeat until all the data is converted.
If the error message is NOT correct (which is possible), the most likely cause is a bad or out-of-range date in your text-date column.
Copy the table (i.e. 'YourTable') then paste just its structure back into your database with a different name (i.e. 'YourTable_new'). Change the fields in the new table to what you want and save it. Create an append query and copy all the data from your old table into the new one. Hopefully Access will automatically convert the old text field directly to the correct value for the new Date/Time field. If not, you might have to clear out the old table and re-append all the data and use a string to date function to convert that one field when you do the append. Also, if there is an autonumber field in the old table this might not work because there is no way to ensure that the old autonumber values will line up with the new autonumber values that get assigned.
You've been offered a bunch of different ways to get around the disk space error message. Have you tried adding a new field to your existing table using the Date data type, and then updating that field with the values from the existing string date field? If that works, you can then delete the old field and rename the new one to the old name. That would probably take up less temp space than doing a direct conversion from string to date on a single field. If it still doesn't work, you may be able to do it with a second table with two columns, the first a long integer (make it the primary key), the second a date. Then append the PK and string date field to this empty table. Then add a new date field to the existing table, and, using a join, update the new field with the values from the two-column table. This may run into the same problem; it depends on a number of things internal to the Jet/ACE database engine over which we have no real control.