What is the SCatalog structure used for in TDengine database?

There seems to be a catalog module in the TDengine database, but I don't know what it is used for.
typedef struct SCatalog {
  uint64_t      clusterId;
  SHashObj     *userCache;   // key: user,   value: SCtgUserAuth
  SHashObj     *dbCache;     // key: dbname, value: SCtgDBCache
  SCtgRentMgmt  dbRent;
  SCtgRentMgmt  stbRent;
} SCatalog;
Could anyone help with this?

In TDengine 3.0:
The catalog module is used to store various metadata on the client side; it is initialized through the catalogInit API. Each database service (cluster ID) corresponds to one SCatalog. Within a SCatalog, each db corresponds to a SCtgDBCache (which stores child tables and supertables in hash tables), and there is a user cache (which caches user information, also stored in hash table form).
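As a rough conceptual sketch of the relationships described above (plain Python with hypothetical field contents, not actual TDengine code):

from dataclasses import dataclass, field

# Hypothetical stand-ins for the cached metadata; the real SCtgDBCache and
# SCtgUserAuth structs carry far more detail.
@dataclass
class DbCache:
    child_tables: dict = field(default_factory=dict)   # table name -> table meta
    super_tables: dict = field(default_factory=dict)   # supertable name -> schema

@dataclass
class Catalog:
    cluster_id: int
    user_cache: dict = field(default_factory=dict)  # user -> auth info (SCtgUserAuth)
    db_cache: dict = field(default_factory=dict)    # dbname -> DbCache (SCtgDBCache)

# One Catalog per cluster: created on first use, then reused by the client
# whenever it needs metadata during query planning.
catalogs: dict[int, Catalog] = {}

def get_catalog(cluster_id: int) -> Catalog:
    return catalogs.setdefault(cluster_id, Catalog(cluster_id))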
As for SCtgRentMgmt dbRent and SCtgRentMgmt stbRent, I need to investigate further.

Related

Is there a way to find forced execution context on datastore objects somewhere in the ODI metadata database?

I have an ODI 12c project with 30 mappings. I need to check whether every "Component context" on every datastore object (source or target) is set to "Execution context" (not forced).
Is there a way to achieve this by querying the ODI underlying database, so I don't have to do this manually and can avoid possible mistakes?
I have a list of ODI 12c repository tables and comments on table columns, which I got from the Oracle support website, and after hours of digging through the database I still can't see this information stored in any table.
My package is located in SNP_PACKAGE, SNP_MAPPING has info about the mapping, and SNP_MAP_COMP describes the objects in the mapping.
I have searched through many different tables as well.
A bit late, but for anyone else looking:
Messing with the tables directly is a no-no; the APIs are better, especially if you are going to modify anything.
https://docs.oracle.com/en/middleware/data-integrator/12.2.1.3/odija/index.html
Run the following Groovy script in ODI (Tools/Groovy/New Script). It should be simple enough to modify. Using the SDK gets a lot easier if you manage to set up a complete development environment in IntelliJ or another Java IDE. Groovy in ODI opens up a whole new world.
//Created by DI Studio
import oracle.odi.domain.mapping.Mapping
import oracle.odi.domain.mapping.finder.IMappingFinder
tme = odiInstance.getTransactionalEntityManager()
IMappingFinder mapf = (IMappingFinder) tme.getFinder(Mapping.class)
Collection<Mapping> mappings = mapf.findByProject("PROJECT","FOLDER")
println("Found ${mappings.size()} mappings")
mappings.each { map ->
    map.physicalDesigns.each { phys ->
        phys.physicalNodes.each { node ->
            println("${map.project.name}...${map.parentFolder.parentFolder?.name}.${map.parentFolder.name}.${map.name}.${phys.name}.${node.name}.defaultContext=${(node.context.defaultContext) ? "default" : node.context.name}")
        }
    }
}
It prints either default or the set (forced) context. It seems forced context has been deprecated in 12c; node.context.defaultContext on the physical node appears to mirror Component Context (Forced) in ODI Studio 12.2.1.3.
Update 2019-12-20 - including getExecutionContextName
The following script lists the same information in a hierarchical manner and may make the code easier to read. I'm not sure you'll get exactly what you were originally after without having a mapping with your exact setup.
//Created by DI Studio
import oracle.odi.domain.mapping.Mapping
import oracle.odi.domain.mapping.finder.IMappingFinder
import oracle.odi.domain.mapping.component.DatastoreComponent
tme = odiInstance.getTransactionalEntityManager()
String project = "PROJECT"
String parentFolder = "PARENT_FOLDER"
IMappingFinder mapf = (IMappingFinder) tme.getFinder(Mapping.class)
Collection<Mapping> mappings = mapf.findByProject(project, parentFolder)
println("Found ${mappings.size()} mappings")
println "Project: ${project}"
mappings.each { map ->
    println "\tMapping: ..${map.parentFolder.parentFolder?.name}/${map.parentFolder.name}/${map.name}"
    map.physicalDesigns.each { phys ->
        println "\t\tPhysical: ${phys.name}"
        phys.physicalNodes.each { node ->
            println "\t\t\tNode: ${node.name}"
            println "\t\t\t\tdefaultContext: ${(node.context.defaultContext)}"
            println "\t\t\t\tNode context name: ${node.context.name}"
            println "\t\t\t\tDatastoreComponent ExecutionContextName: ${DatastoreComponent.getDatastoreComponent(node)?.getExecutionContextName(node).toString()}"
        }
    }
}
Below is a list of some tables and columns that might hold the value you are looking for.
These tables and columns are from ODI 12.1.2; depending on the exact ODI version you are using, the structure could be a little different.
Here is also a query to retrieve this information directly from the database.
-- Forced Contexts on Datastores in Mapping
SELECT MAPP.NAME MAP_NAME,
       MAPP_COMP.NAME DATASTORE_NAME,
       MAPP_REF.QUALIFIED_NAME FORCE_CONTEXT
FROM SNP_MAPPING MAPP
INNER JOIN SNP_MAP_REF MAPP_REF
        ON MAPP_REF.I_OWNER_MAPPING = MAPP.I_MAPPING
INNER JOIN SNP_MAP_PROP MAPP_PROP
        ON MAPP_REF.I_MAP_REF = MAPP_PROP.I_PROP_XREF_VALUE
INNER JOIN SNP_MAP_COMP MAPP_COMP
        ON MAPP_COMP.I_MAP_COMP = MAPP_PROP.I_MAP_COMP
WHERE MAPP_REF.ADAPTER_INTF_TYPE = 'IContext'
  AND MAPP.NAME LIKE '%yourMapping%'

Getting data from Cognos TM1 via REST API

I'm working with the IBM Cognos TM1 REST API.
I need a subset of the data values contained in a cube (Cube1, for example).
So I'm executing a view (View1, for example) and obtaining a cellset.
http://server:port/api/v1/Cubes('Cube1')/Views('View1')/tm1.execute?$expand=Cells($select=Ordinal,FormattedValue,Consolidated)
However, I get many more cell values than I need.
My questions are:
Can I create my own view via the REST API only? (And how?)
Can I ask the API to return only non-consolidated values?
Can I obtain cell values in some other way, without views?
Try creating a view via ExecuteMDX
Post Query:
api/v1/ExecuteMDX?$expand=Axes($expand=Hierarchies($select=Name),Tuples($expand=Members($select=Name))),Cells($select=Ordinal,Value)
And then in the Body
{
  "MDX": "SELECT
            {[Version].[Actual]} *
            {[Year].[2017]} *
            {[Location].[1001]} *
            {[Period].[Total Year]} *
            {[Currency].[USD]} *
            {[Department].[Total Department]} *
            {[Product Type].[Total Product Type]} *
            {TM1FILTERBYLEVEL({TM1SUBSETALL( [Account] )}, 0)} *
            {[Cube1 Measure].[Amount]} ON 0
          FROM [Cube1]"
}
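If you want to call this from a script, here is a minimal sketch using Python and the requests library (the server:port placeholder comes from the question; the credentials and the assumption that basic authentication is enabled are mine):

import requests

# Adjust to your TM1 server; "server:port" is a placeholder as in the question.
BASE_URL = "http://server:port/api/v1"

# Any MDX that returns the slice you need, e.g. leaf-level Account members only.
mdx = "SELECT {TM1FILTERBYLEVEL({TM1SUBSETALL([Account])}, 0)} ON 0 FROM [Cube1]"

resp = requests.post(
    BASE_URL + "/ExecuteMDX?$expand=Cells($select=Ordinal,Value)",
    json={"MDX": mdx},
    auth=("admin", "password"),  # assumption: basic authentication is enabled
)
resp.raise_for_status()

# With the $expand above, the returned cellset carries a Cells array.
for cell in resp.json().get("Cells", []):
    print(cell["Ordinal"], cell["Value"])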
Good Luck!
You can create dynamic views using the TM1 Java APIs. You can find detailed documentation in \tm1_64\TM1JavaApiDocs\, or by default at
C:\Program Files\ibm\cognos\tm1_64\TM1JavaApiDocs
and sample code is located in C:\Program Files\ibm\cognos\tm1_64\tm1api\samplecode\java.
Hope this helps you.

How to decode OLAP Query?

I am totally new to OLAP servers. I have an OLAP query that is working fine; I just want to know which tables are joined to produce the result, and how (I mean with which joins). Here is the query.
WITH
  MEMBER [Measures].[ThisYearMonthToDate] AS
    'Sum({[Time].[All Time].[2013].[Q1].[January],
          [Time].[All Time].[2013].[Q1].[February],
          [Time].[All Time].[2013].[Q1].[March],
          [Time].[All Time].[2013].[Q2].[April],
          [Time].[All Time].[2013].[Q2].[May]},
         [Measures].[Main Temp Id])'
  MEMBER [Measures].[LastYearMonthToDate] AS
    'Sum({[Time].[All Time].[2012].[Q1].[January],
          [Time].[All Time].[2012].[Q1].[February],
          [Time].[All Time].[2012].[Q1].[March],
          [Time].[All Time].[2012].[Q2].[April],
          [Time].[All Time].[2012].[Q2].[May]},
         [Measures].[Main Temp Id])'
SELECT
  {[Measures].[LastYearMonthToDate], [Measures].[ThisYearMonthToDate]} ON COLUMNS,
  {([PublicRegion].[All Regions].[USA]),
   ([PublicRegion].[All Regions].[USA].[Northeast]),
   ([PublicRegion].[All Regions].[USA].[Midwest]),
   ([PublicRegion].[All Regions].[USA].[Southeast]),
   ([PublicRegion].[All Regions].[USA].[Southwest]),
   ([PublicRegion].[All Regions].[USA].[West Coast]),
   ([PublicRegion].[All Regions].[USA].[Misc]),
   ([PublicRegion].[All Regions].[Europe]),
   ([PublicRegion].[All Regions].[Europe].[UK]),
   ([PublicRegion].[All Regions].[Europe].[France]),
   ([PublicRegion].[All Regions].[Europe].[Italy]),
   ([PublicRegion].[All Regions].[Europe].[Germany]),
   ([PublicRegion].[All Regions].[Europe].[Spain]),
   ([PublicRegion].[All Regions].[Canada]),
   ([PublicRegion].[All Regions].[Other])} ON ROWS
FROM Public
I am not sure how to decode this query. Please help me.
There are two pretty easy ways to find out:
Log of your OLAP server: I'm almost sure that all leading OLAP tools log the SQL queries they send to the database server.
Log of your database server: Set your database to log all queries from all users. By the time of execution and the user name you declared in the metadata file, you can easily filter the queries sent by the OLAP tool.
Hope this helps,
Best regards

pull Drupal field values with db_query() or db_select()

I've created a content type in Drupal 7 with 5 or 6 fields. Now I want to use a function to query them in a hook_view callback. I thought I would query the node table, but all I get back are the nid and title. How do I get back the values of my created fields using the database abstraction API?
Drupal stores the fields in other tables and can automatically join them in. The storage varies depending on how the field is configured, so the easiest way to access them is by using an EntityFieldQuery. It'll handle the complexity of joining all your fields in. There are some good examples of how to use it here: http://drupal.org/node/1343708
But if you're working in hook_view, you should already be able to access the values; they're loaded into the $node object that's passed in as a parameter. Try running:
debug($node);
In your hook and you should see all the properties.
If you already know the IDs of the nodes (nid) you want to load, you should use node_load_multiple() to load them. This will load the complete nodes with all field values. To search for node IDs, EntityFieldQuery is the recommended way, but it has some limitations. You can also use the database API to query the node table for the nid (and revision ID, vid) of your nodes, then load them using node_load_multiple().
Loading complete nodes can have a performance impact, since it will load far more data than you need. If this proves to be an issue, you can try to access the field storage tables directly (if your field values are stored in your SQL database). The schema of these tables is built dynamically depending on the field types, cardinality and other settings. You will have to dig into your database schema to figure it out, and it will probably change as soon as you change something on your fields.
Another solution is to build stub node entities and to use field_attach_load() with an $options['field_id'] value to only load the value of a specific field. But this requires a good knowledge and understanding of the Field API.
See the How to use EntityFieldQuery article in the Drupal Community Documentation.
Creating A Query
Here is a basic query looking for all articles with a photo that are tagged as a particular faculty member and published this year. In the last 5 lines of the code below, the $result variable is populated with an associative array with the first key being the entity type and the second key being the entity id (e.g., $result['node'][12322] = partial node data). Note that $result won't have the 'node' key when it's empty, hence the check using isset; this is explained here.
Example:
<?php
$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', 1)
  ->fieldCondition('field_news_types', 'value', 'spotlight', '=')
  ->fieldCondition('field_photo', 'fid', 'NULL', '!=')
  ->fieldCondition('field_faculty_tag', 'tid', $value)
  ->fieldCondition('field_news_publishdate', 'value', $year . '%', 'like')
  ->fieldOrderBy('field_photo', 'fid', 'DESC')
  ->range(0, 10)
  ->addMetaData('account', user_load(1)); // Run the query as user 1.
$result = $query->execute();
if (isset($result['node'])) {
  $news_items_nids = array_keys($result['node']);
  $news_items = entity_load('node', $news_items_nids);
}
?>
Other resources
EntityFieldQuery on api.drupal.org
Building Energy.gov without Views

How to use MS Sync Framework to filter client-specific data?

Let's say I've got a SQL 2008 database table with lots of records associated with two different customers, Customer A and Customer B.
I would like to build a fat client application that fetches all of the records that are specific to either Customer A or Customer B based on the credentials of the requesting user, then stores the fetched records in a temporary local table.
Thinking I might use the MS Sync Framework to accomplish this, I started reading about row filtering when I came across this little chestnut:
Do not rely on filtering for security. The ability to filter data from the server based on a client or user ID is not a security feature. In other words, this approach cannot be used to prevent one client from reading data that belongs to another client. This type of filtering is useful only for partitioning data and reducing the amount of data that is brought down to the client database.
So, is this telling me that the MS Sync Framework is only a good option when you want to replicate an entire table between point A and point B?
Doesn't that seem to be an extremely limiting characteristic of the framework? Or am I just interpreting this statement incorrectly? Or is there some other way to use the framework to achieve my purposes?
Ideas anyone?
Thanks!
No, it is only a security warning.
We use filtering extensively in our semi-connected app.
Here is some code to get you started:
// helper
void PrepareFilter(string tablename, string filter)
{
    SyncAdapters.Remove(tablename);
    var ab = new SqlSyncAdapterBuilder(this.Connection as SqlConnection);
    ab.TableName = "dbo." + tablename;
    ab.ChangeTrackingType = ChangeTrackingType.SqlServerChangeTracking;
    ab.FilterClause = filter;
    var cpar = new SqlParameter("@filterid", SqlDbType.UniqueIdentifier);
    cpar.IsNullable = true;
    cpar.Value = DBNull.Value;
    ab.FilterParameters.Add(cpar);
    var nsa = ab.ToSyncAdapter();
    nsa.TableName = tablename;
    SyncAdapters.Add(nsa);
}

// usage
void SetupFooBar()
{
    var tablename = "FooBar";
    var filter = "FooId IN (SELECT BarId FROM dbo.GetAllFooBars(@filterid))";
    PrepareFilter(tablename, filter);
}
