Setting a value marked with JET_bitColumnAutoincrement with ESE

I've created a column in ESE with the grbit set to JET_bitColumnAutoincrement. In normal usage this is what I want: the database sets the value to something unique.
However, because of the way my database operates, there are rare times when I need to set the value directly. I am 100% certain the ID I'm adding is not already in use; this is a rebuild-type operation, not the normal case.
Is this possible? Is there a way to keep the column autoincrement while retaining the ability to set it myself?

You cannot set the value directly. Esent would have to change the way autoincrement values are implemented to support that.

Related

Is it a good practice to set default values for created and updated date columns?

I'm creating some tables and adding CreatedUtc and UpdatedUtc date columns. A previous project only set a default value for CreatedUtc.
Wouldn't it make sense to set a default value for UpdatedUtc as well?
I don't set defaults on them at the database level.
I have come across cases when importing data from external sources where we don't have a created/last-modified timestamp for entries, and it didn't make sense to assign them arbitrary values (e.g., doing so would cause issues with existing search models).
I let my Data Access Layer (DAL) handle setting and maintaining these columns.
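For reference, this is roughly what the database-level default under discussion looks like; a minimal sketch, assuming SQL Server and a hypothetical Orders table:

CREATE TABLE Orders
(
    OrderId    int IDENTITY PRIMARY KEY,
    -- ... other columns ...
    CreatedUtc datetime2 NOT NULL
        CONSTRAINT DF_Orders_CreatedUtc DEFAULT (SYSUTCDATETIME()),
    UpdatedUtc datetime2 NULL  -- a default only fires on INSERT; keeping this current
                               -- on UPDATE needs a trigger or, as above, the DAL
);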

SQL Server: How to list changed columns with change tracking?

I use SQL Server 2012 Standard edition, and I activated Change Tracking function on a table.
When I list changes on a table with the CHANGETABLE function, I have a SYS_CHANGE_COLUMNS property with binary data
0x0000000045000000460000004700000048000000
How do I know which columns have changed?
Because the column is a bitmask built from the column IDs of all the columns that were changed, it's difficult to decode on your own. In fact, MSDN says not to interrogate SYS_CHANGE_COLUMNS directly: https://msdn.microsoft.com/en-us/library/bb934145.aspx
This binary value should not be interpreted directly.
However, when you are detecting changes for notification purposes, the notification consumer usually has a good idea of which columns it is interested in.
For this use-case, use the CHANGE_TRACKING_IS_COLUMN_IN_MASK function.
-- Get the column ID of my column
declare @MyColumnId int
set @MyColumnId = columnproperty(object_id('MyTable'), 'MyColumn', 'ColumnId')

-- Check if it's changed
declare @MyColumnHasChanged bit
set @MyColumnHasChanged = CHANGE_TRACKING_IS_COLUMN_IN_MASK(@MyColumnId, @change_columns_bitmask);
If CHANGE_TRACKING_IS_COLUMN_IN_MASK tells me whether a single column has changed, how can I write a script that tells me which columns have changed? I have around 50 attributes for each table.
I'm afraid you'll need to loop through all of the columns you may be interested in (see the sketch below). If this is too restrictive, you may have to use another change-notification approach, such as Change Data Capture (CDC) or triggers.
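As a starting point, here is a minimal sketch of that per-column check run against every column of the table at once via sys.columns; the table name MyTable is a placeholder and the bitmask literal is the value from the question:

DECLARE @sys_change_columns varbinary(4100) =
    0x0000000045000000460000004700000048000000;  -- a SYS_CHANGE_COLUMNS value for one changed row

-- list every column of MyTable whose bit is set in the mask
SELECT c.name AS changed_column
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('MyTable')
  AND CHANGE_TRACKING_IS_COLUMN_IN_MASK(c.column_id, @sys_change_columns) = 1;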

How to store site wide settings in a database?

I'm debating three different approaches to storing sitewide settings for a web application.
1. A key/value pair lookup table, where each key represents a setting.
   Pros: simple to implement.
   Cons: no constraints on the individual settings.
2. A single-row settings table.
   Pros: per-setting defaults and constraints.
   Cons: lots of settings would mean lots of columns; not sure if Postgres would have an issue with that.
3. Just hard-code it, since the settings won't change that often.
   Pros: easy to set up and add more settings.
   Cons: much harder to change.
Thoughts on which way to go?
Since your question is tagged with database/sql, I presume you'd have no problem accessing an SQL table for both lookup and management of settings... Guessing here, but I'd start with a table like:
settingName  value  can_be_null  minvalue  maxvalue  description
TheAnswer    42     no           1         100       this setting does...
...
If you think about managing a large number of settings, there's more information you need about each one of them than just their current value.
I've used a key/value pair lookup table much in the way you describe with good results.
As an added bonus, the table had a "configuration name" column which provided a simple way to choose/activate a specific set of configuration settings. That meant that prod, dev, and test could all live in the same table, though it was up to the application to choose which set to use; in our case, a JVM argument made sense. It might make sense to store different "sets" of config settings in the same DB table; then again, it might not.
If you are thinking about file-based configuration, I like INI or YAML. You could still store it in a database, though you probably won't find an INI or YAML column type (as you might for XML).
I would go with the first option -- a key/value pair lookup table. It's the most flexible and scalable solution, in my opinion. If you are worried about the cost of running many queries here and there to retrieve various config values, you could always implement some sort of cache, such as loading the whole table into memory at once. In addition to key and value, you could add columns such as "Description" and "Default Value", and build a generic configuration editor that displays the descriptions on-screen to help the user edit the config values.
I've seen some commercial applications with a single-row config table, and while I don't have direct experience doing development work against it, it struck me as much less scalable and harder to read.
Following Mike's idea, here is a script that creates a table to store key/value pairs. It includes a check constraint to validate each value against its min/max/not-null rules, and it automatically creates a function fn_setting_XXXX() to quickly get the value of the corresponding setting (correctly cast).
CREATE TABLE settings
(
    id serial,
    name varchar(30),
    type regtype,
    value text,
    v_min double precision,
    v_max double precision,
    v_not_null boolean default true,
    description text,
    constraint settings_pkey primary key (id),
    constraint setting_unique unique (name),
    constraint setting_type check (type in ('boolean'::regtype, 'integer'::regtype, 'double precision'::regtype, 'text'::regtype))
);
/* constraint to check value */
ALTER TABLE settings
    ADD CONSTRAINT check_value
    CHECK (
        case when type in ('integer'::regtype, 'double precision'::regtype) then
            case when v_max is not null and v_min is not null then
                value::double precision <= v_max and value::double precision >= v_min
            when v_max is not null then
                value::double precision <= v_max
            when v_min is not null then
                value::double precision >= v_min
            else
                true
            end
        else
            true
        end
        and
        case when v_not_null then
            value is not null
        else
            true
        end
    );
/* trigger function to (re)create the get function for quick access to the setting */
CREATE OR REPLACE FUNCTION ft_setting_create_fn_get() RETURNS TRIGGER AS
$BODY$
BEGIN
    -- drop the old accessor when a setting is renamed, retyped, or deleted
    IF TG_OP <> 'INSERT' THEN
        EXECUTE format($$DROP FUNCTION IF EXISTS fn_setting_%1$I();$$, OLD.name);
    END IF;
    -- (re)create the accessor when a setting is inserted or updated
    IF TG_OP <> 'DELETE' THEN
        EXECUTE format($$
            CREATE FUNCTION fn_setting_%1$I() RETURNS %2$s AS
            'SELECT value::%2$s FROM settings WHERE name = ''%1$I''' LANGUAGE sql
        $$, NEW.name, NEW.type::regtype);
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;  -- NEW is null on DELETE; returning it would cancel the delete
    END IF;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER tr_setting_create_fn_get_insert
    BEFORE INSERT OR DELETE ON settings
    FOR EACH ROW
    EXECUTE PROCEDURE ft_setting_create_fn_get();
COMMENT ON TRIGGER tr_setting_create_fn_get_insert ON settings IS 'Trigger: automatically create get function for inserted settings';

CREATE TRIGGER tr_setting_create_fn_get_update
    BEFORE UPDATE OF type, name ON settings
    FOR EACH ROW
    WHEN (NEW.type <> OLD.type OR NEW.name <> OLD.name)
    EXECUTE PROCEDURE ft_setting_create_fn_get();
COMMENT ON TRIGGER tr_setting_create_fn_get_update ON settings IS 'Trigger: automatically recreate get function for renamed or retyped settings';
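To illustrate how the pieces fit together, a hypothetical usage example (the setting name theanswer is made up, and kept lowercase so the generated function name avoids the identifier quoting that format()'s %I adds for mixed-case names):

-- insert a setting; the BEFORE INSERT trigger generates fn_setting_theanswer()
INSERT INTO settings (name, type, value, v_min, v_max, description)
VALUES ('theanswer', 'integer'::regtype, '42', 1, 100, 'this setting does...');

-- read it back, already cast to the declared type
SELECT fn_setting_theanswer();  -- returns 42 as an integer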
A mixed approach is best. You have to consider what suits each setting, which largely boils down to who would change each site-wide setting.
If you have a development server and a live server, adding new application settings can be awkward if they are solely in the db. You either need to update the database before you update the code, or have all your code handle the situation where a setting is unavailable. Obviously one common sitewide setting is the database name, and that can't be stored in the database!
You can easily end up with different settings in your test and live environments. I've taken settings away from the DB and into text files before now.
I would recommend having defaults in a 'hardcoded' file, which may then be overridden by a key/value pair lookup table.
You can therefore push up new code without first needing to change the settings stored in the database.
Where there is a varying number of values, or values that are always changed at the same time, I'd store them as JSON or another serialised form (see the sketch below).
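A minimal sketch of that last idea, assuming PostgreSQL's jsonb type and a made-up smtp setting:

CREATE TABLE settings_json
(
    name  varchar(30) primary key,
    value jsonb not null
);

-- values that always change together live in one row
INSERT INTO settings_json (name, value)
VALUES ('smtp', '{"host": "mail.example.com", "port": 587, "use_tls": true}');

-- read a single field back out
SELECT value->>'host' FROM settings_json WHERE name = 'smtp';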
Go with #1. If you want constraints based on simple types, then rather than having a simple string as a value, add a date and number field as well. The individual properties will "know" what value they want. No reason to get all meta about it.
If I had to choose, I'd go with the first option. It is easy to add or remove rows as you need, whereas the single-row table could end up being a nightmare and is probably a lot less scalable. As for option 3: it's possible you will regret hard-coding your settings in the future, so you definitely don't want to box yourself in.
Although you didn't list it as an option, is XML available? It is easy to set up, and gives you slightly more options, as you can nest settings within settings.
I include a separate PHP script containing just the settings:
$datatables_path = "../lib/dataTables-1.9.4/media";
$gmaps_utils_dir = "../lib/gmaps-utils";
$plupload_dir = "../lib/plupload-1.5.2/js";
$drag_drop_folder_tree_path = "../lib/dhtmlgoodies/drag-drop-folder-tree2";
$lib_dir = "../lib";
$dbs_dir = "../.no_backup/db";
$amapy_api_registration_id = "47e5efdb-d13b-4487-87fc-da7920eb6618";
$google_maps_api_key = "ABQIABBDp7qCIMXsNBbZABySLejWiBSmGz7YWLno";
So it's your third variant.
I don't actually see what you find hard about changing these values; in fact, this is the easiest way to administer these settings. This is not the kind of data you want your users (with different roles) to change via a web interface. Products like PHPMyAdmin and Joomla happily use this approach.
I have used a mixed approach before, in which I put all the settings that are not likely to change into a separate PHP file, and kept the individual settings that are likely to change as key/value pairs in the database. That way I could reduce the number of entries in the database, which reduced my overall query time; it also helped me keep the key size small.

Always save a constant to a SQL Server field

Is it possible to force a field to always be a certain value?
We have a website in production that writes a value entered by the user into a string field. Now, our requirements have changed and we no longer actually want to save this value. For technical reasons, we don't want to do a new publish just for this if we don't have to.
What would be ideal is to ALTER the table in such a way that the field will always be NULL and that existing INSERTS and UPDATES into the field will work as normal but SQL Server will NULL the field regardless.
This is a temporary thing. We will eventually change the code to not write this value.
Just looking for a quick way to NULL the field without changing code, republishing etc...
Is this possible? Is writing a trigger the only solution?
Thanks!
You could create an AFTER INSERT, UPDATE trigger that sets the field back to NULL each time a row is inserted or updated.
This is the quickest and easiest solution.
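A minimal sketch of such a trigger; the table dbo.MyTable, its key column Id, and the column to blank out, LegacyValue, are all placeholders:

CREATE TRIGGER trg_MyTable_NullLegacyValue
ON dbo.MyTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- blank out the column on whatever rows were just written;
    -- assumes the RECURSIVE_TRIGGERS database option is OFF (the default),
    -- so this UPDATE does not re-fire the trigger
    UPDATE t
    SET LegacyValue = NULL
    FROM dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.Id = t.Id;
END;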

Generating Primary Key with Non-Zero Index (SSIS Data Flow)

I've got a data flow task that takes a pair of tables, mashes the relevant data together, and comes out with some results to be put into an indexed table. The indexed table already has data that I'm not getting rid of, and for simplicity's sake those rows should retain their existing keys. So, I need to generate keys that start from the highest primary key value already in the column.
I have found a blog post that works when starting from any known value, but this data flow will eventually be used on different databases, so that value won't be constant. It will always be the max of the column, but I can't find a way to grab that value using the script component suggested there.
This type of thing is notoriously difficult to do in SSIS, which is why I try to avoid it. You need to:
...brace yourself...
- Create a variable in your SSIS package to hold the start value.
- Create an Execute SQL Task with a parameter mapped to that variable, with a direction of Output, and a query something like "SET ? = (SELECT MAX(IDValue) FROM Table)" - the question mark is the placeholder for the parameter which maps to the variable.
- Work the variable into your data flow - probably with a Derived Column transformation.
I hope this helps...
