Full Text search on a Varbinary Column - Is it possible to use a mime type rather than a file extension as my Type Column? - sql-server

I am setting up full-text search on an existing database. We have a Document table with the following schema:
ID int NOT NULL,
Data varbinary(max) NOT NULL,
MimeType varchar NOT NULL
I want to use full-text search on the Data column, using the MimeType column to specify the document type.
I was hoping it would be possible to register new types into whatever underlying tables back the sys.fulltext_document_types view. Is this possible?

No, it isn't possible. IFilters are mapped to file extensions, not MIME types. A new column holding the extension has to be added and populated.
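As a sketch of that workaround, assuming the Document table from the question: add an extension column, populate it from MimeType, and point the full-text index at it via TYPE COLUMN. The CASE mapping and the PK_Document index name are illustrative, not from the original.

```sql
-- Add and populate an extension column (the MIME-type mapping is a sketch)
ALTER TABLE Document ADD FileExtension varchar(10);

UPDATE Document
SET FileExtension = CASE MimeType
        WHEN 'application/pdf'    THEN '.pdf'
        WHEN 'application/msword' THEN '.doc'
        ELSE '.txt'
    END;

-- Index the binary data, using the new column as the TYPE COLUMN;
-- PK_Document is an assumed unique key index name on the table.
CREATE FULLTEXT INDEX ON Document (Data TYPE COLUMN FileExtension)
    KEY INDEX PK_Document;
```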

Related

Creating array fields in console not possible?

I wanted to add a simple repeating string field to a table in BigQuery, but the console would only allow me to add a field of type "RECORD", which could then contain a single repeating string field. I think that would result in an ARRAY of a STRUCT with one string field.
I used ALTER TABLE with ADD COLUMN Shelve ARRAY<STRING>, and that worked to create an array field. But why does Google not allow declaring a field "repeating" in the console's edit-schema tab?
It is possible in the console: when adding the field, choose "REPEATED" under "Mode".
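In DDL terms, the console's "Mode: REPEATED" corresponds to an ARRAY column. A minimal sketch (the dataset/table/field names are illustrative, and the element type is assumed to be STRING as in the question):

```sql
CREATE TABLE mydataset.books (
  title  STRING,
  Shelve ARRAY<STRING>  -- shows in the console as STRING with Mode = REPEATED
);
```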

Azure Search : Blob Metadata Field value not appearing in Indexed data

We have set metadata on a block blob and can verify that the key/value pairs are correctly stamped on the blob.
Of the many fields defined in the index, only one field value (PromotionId) is not appearing in the indexed data, as can be confirmed by running a search through the "Search explorer".
This field has been mapped to the key "ID" in the indexer, and it has been defined in the index.
Why is the value of this specific field not appearing in the index? All the other metadata fields appear in the index as expected.
The field mapping is not working because "ID" is specified as a sourceFieldName, but there is no ID property on the source Blob as it only exists on the Index you have defined.
This may be a bit confusing since it behaves like there is an "ID" property because the "ID" field is being populated without a field mapping. However, this is because Azure Search automatically maps the "metadata_storage_path" to whichever field is the document key when no field mapping for the document key is specified. This behavior is documented here.
If you want PromotionId to hold the document path like the ID field does, you can change the sourceFieldName to "metadata_storage_path" in the PromotionId field mapping. If you also want it base64-encoded, you can add a fieldMappingFunction to the field mapping as well.
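A hedged sketch of what that corrected field mapping in the indexer definition might look like (field names are taken from the question; the rest of the indexer body is omitted):

```json
{
  "fieldMappings": [
    {
      "sourceFieldName": "metadata_storage_path",
      "targetFieldName": "PromotionId",
      "mappingFunction": { "name": "base64Encode" }
    }
  ]
}
```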

Batch table/Column update in powerdesigner reading from excel using vb script

I am looking to develop a VB script that extracts all the tables/columns from a PowerDesigner model to an Excel file. After changing a few properties, I will update them back into the model using VBScript. So I would like to know if there is any property of a column which can uniquely identify each column of a table. For example: the ROWID pseudo-column in Oracle.
Does powerdesigner maintain unique id for each object created in PDM?
Most (all?) modeling objects derive from IdentifiedObject, which has an ObjectID property, a GUID.
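As a sketch of the export side, a script run from PowerDesigner's script editor could walk the active model and record each column's ObjectID alongside its name; the Output calls stand in for writing to Excel, and running inside PowerDesigner (where ActiveModel and Output exist) is assumed:

```vbscript
' Sketch: list every table/column of the active PDM with its ObjectID.
Dim tbl, col
For Each tbl In ActiveModel.Tables
    Output tbl.Name & vbTab & tbl.ObjectID
    For Each col In tbl.Columns
        Output vbTab & col.Name & vbTab & col.ObjectID
    Next
Next
```

Matching rows back by ObjectID on re-import avoids relying on names, which may have been edited in Excel.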

How do you change the data type of a table field in MS Access without having to create a new form to update the table?

I have customerID as a primary key (AutoNumber), and when I put customerID on another table I left it as Short Text instead of (obviously) the Number data type; I only realised after I had created my forms and queries. Is there a way of changing the data type and still being able to update my table using the forms I created, without having to delete the relationship, fix it, and redo all my forms?
When I change the data type, this message appears: "Some data will be lost. The setting for the FieldSize property of one or more fields has been changed to a shorter size. If data is lost, validation rules may be violated as a result."
Always create a new column and copy into it the data you're going to convert before trying the conversion, just in case. New column name: customerID_bak. Copy all the customerID data into it, then try the conversion; if something goes wrong, you still have the original data. I think you might not actually have an issue with your forms after the conversion.
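The backup step can be done in SQL view as well; a sketch assuming the foreign-key table is named Orders and a 255-character text field (both assumptions, not from the question):

```sql
ALTER TABLE Orders ADD COLUMN customerID_bak TEXT(255);

UPDATE Orders SET customerID_bak = customerID;
```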

How to define dynamic column families in cassandra

Here it is said that no special effort is needed to get a dynamic column family. But I always get an exception when I try to set a value for an undefined column.
I created a column family like this:
CREATE TABLE places (
latitude double,
longitude double,
name text,
tags text,
PRIMARY KEY (latitude, longitude, name)
)
BTW: I had to define the tags column. Can somebody explain to me why? Maybe because all the other columns are part of the primary key?
Now when inserting data like this:
INSERT INTO places ("latitude","longitude","name","tags") VALUES (49.797888,9.934771,'Test','foo,bar')
it works just fine! But when I try:
INSERT INTO places ("latitude","longitude","name","tags","website") VALUES (49.797888,9.934771,'Test','foo,bar','test.de')
I get following error:
Bad Request: Unknown identifier website
text could not be lexed at line 1, char 21
Which changes are needed so I can dynamically add columns?
I am using Cassandra 1.1.9 with CQL3 with the cqlsh directly on a server.
CQL3 supports adding new columns to a column family, but you have to alter the table schema first:
ALTER TABLE places ADD website varchar;
Check out the Cassandra 1.2 documentation and the "CQL in depth" slides.
CQL3 requires column metadata to exist. CQL3 is actually an abstraction over the underlying storage rows, so the mapping is not one-to-one. If you want to use dynamic column names (and there are lots of excellent use cases for them), use the traditional Thrift interface (via the client library of your choice). This will give you full control over what gets stored.
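Under the schema-first approach from the accepted answer, the failing INSERT from the question becomes valid once the column exists:

```sql
ALTER TABLE places ADD website varchar;

INSERT INTO places ("latitude","longitude","name","tags","website")
VALUES (49.797888, 9.934771, 'Test', 'foo,bar', 'test.de');
```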
