I can use fwmkeys to iterate over keys that share a prefix, but how can I modify these keys as I iterate? In my case I only need to delete them or set them to empty; either will work.
Oh, I see. fwmkeys returns the key strings, so I can then operate on those keys directly.
I have a source with 100+ columns.
I need to pass these through a script component transformation, which alters only a handful of the columns.
Is there a simple way to let any columns I don't modify pass through the transformation untouched?
Currently, I have to pass them all in and assign them to the output with no changes.
That is a lot of work for 100+ columns, and I'd rather not have to do it if possible!
FYI:
There is no unique key, so I cannot split the records out with a Multicast and merge them back after the script component.
You have to choose which columns you want included in your script component, as either read-only or read/write.
Anything you do not select as read/write simply passes through.
There are other things you can do with a script component as well, like adding an output column to your current data flow or even creating a separate data flow output.
In your case, select the handful of columns you want to alter as read/write, modify those columns in the script, and the rest will just pass through.
Is it possible to index a field and then blank it out?
The reason for this would be that I have a plain text field and a field containing the encrypted version of the text. I'd like to index the plain text, and then remove it so only the encrypted data remains.
I tried modifying the passed doc in my index function, but it doesn't seem to affect storage.
No, it is not possible to index a field and then blank it out; that is by design. Views and indexes reflect only the latest version of the documents, so when you 'blank' a field, the corresponding view/index entry is blanked as well. The view/index is kept in sync with the document, and there is no option to make them diverge.
To achieve the effect you want, your map or index function would need to decrypt the encrypted field and send the plain text to the index. However, the index itself is not encrypted, so that would probably defeat the purpose of having the encrypted field in your document in the first place.
By default, are all Oracle table names and column names stored in uppercase?
Can I change the casing?
In the data dictionary, yes, identifiers are converted to upper case by default.
You can change that behavior by creating case-sensitive identifiers. It is generally not a good idea, but you can. To do so, you need to enclose the table name and column names in double quotes both when you create the object and every time you refer to them, and you need to get the casing right each time, because the identifiers become case-sensitive rather than following the normal case-insensitive behavior.
If you
CREATE TABLE "foo" (
"MyMixedCaseColumn" number
);
then the table name and column name will be stored in mixed case in the data dictionary. You'll need to use double-quotes to refer to either identifier in the future. So
SELECT "MyMixedCaseColumn"
FROM "foo"
will work. However, something like
SELECT MyMixedCaseColumn
FROM foo
will not. Nor will
SELECT "MyMixedCaseColumn"
FROM "Foo"
because "Foo" does not match the stored identifier "foo". Generally, future developers will be grateful if you don't use case-sensitive identifiers. It's annoying to have to use double quotes all over the place, and not every tool or library has been tested against schemas that use case-sensitive identifiers, so it's not uncommon for things to break.
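For contrast, here is a minimal sketch of the default, case-insensitive behavior (the table and column names are purely illustrative). Created without quotes, the identifiers are stored in uppercase in the data dictionary, and you can reference them with any casing:
CREATE TABLE bar (
  mycolumn number
);
SELECT mycolumn FROM bar;       -- works
SELECT MyColumn FROM BAR;       -- also works: unquoted identifiers fold to BAR.MYCOLUMN
SELECT "MYCOLUMN" FROM "BAR";   -- also works, because the stored identifiers are uppercase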
I've created a column in ESE with the grbit set to JET_bitColumnAutoincrement. In normal usage this is what I want: the database sets the value to something unique.
However, the way my database operates, there are rare times when I need to set the value directly. I am 100% certain the ID I'm adding is not already in use; this is a rebuild-type operation, not the normal case.
Is this possible? Is there a way to keep the column autoincrement while retaining the ability to set it myself?
You cannot set the value directly. Esent would have to change the way autoincrement values are implemented to support that.
Should I call lo_unlink?
A DELETE didn't remove the object from pg_largeobject.
You can also clean up orphaned large objects from the command line using
$ vacuumlo -U username databasename
Yes, you need to explicitly call lo_unlink(). I assume you just DELETEd the row that held a reference to it, and that will not remove the actual large object.
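As a minimal sketch, assuming a hypothetical documents table whose file_oid column holds the large object's OID, you would unlink the large object in the same transaction that deletes the referencing row:
BEGIN;
-- remove the large object itself, then the row that referenced it
SELECT lo_unlink(file_oid) FROM documents WHERE id = 42;
DELETE FROM documents WHERE id = 42;
COMMIT;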
If you only ever reference it from the same place, you can always create a trigger to do it automatically for you.
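For example, here is a sketch of such a trigger, again assuming the hypothetical documents/file_oid schema:
CREATE FUNCTION unlink_file_oid() RETURNS trigger AS $$
BEGIN
  -- unlink the large object referenced by the row being deleted
  PERFORM lo_unlink(OLD.file_oid);
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_unlink_lo
  BEFORE DELETE ON documents
  FOR EACH ROW
  EXECUTE PROCEDURE unlink_file_oid();
The contrib lo module also ships a lo_manage trigger function that implements this pattern for columns of type lo, including cleanup when an UPDATE replaces the OID.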