target_schema ignored by dbt on SQL Server

I have been trying to add a target_schema in the dbt_project.yml as well as in the model file itself.
models:
  project_name:
    model_name:
      +target_schema: new_schema
To my understanding, and from what I read in the official documentation, this should work. Instead it is ignored, and the model shows up in the standard dbo schema. I created the schema by hand to make sure it exists, and if I set a schema config it does create one in the format dbo_schema, just as described in the documentation. But the target_schema keeps getting ignored.
Is this simply not supported by SQL Server and dbt?

There is no config called target_schema. The docs you link to use that name for the schema config defined in your active target, which is configured in your profiles.yml file:
# profiles.yml
my_profile:
  target: dev  # this is the default target
  outputs:
    dev:
      schema: dbo  # this is what the docs call target_schema
On an individual model (or a directory of models), you can additionally set a config that is also just called schema. This sets what the docs call a custom_schema. This config is read from dbt_project.yml, a "properties" .yml file, or from a {{ config() }} block in a model file.
# dbt_project.yml
models:
  project_name:
    model_name:
      +schema: new_schema  # this sets what the docs call custom_schema
The above two config files together will materialize model_name to a schema called dbo_new_schema, as explained in the docs that you link to, since the default behavior is <target_schema>_<custom_schema>.
If you just want to materialize all of your models to new_schema, then change the value in your profiles.yml file, and do not set a schema config in your dbt_project.yml or anywhere else. If you only want some models materialized to new_schema, then I recommend sticking with the default convention for custom schemas, since that'll generalize better to multiple environments (for developers, QA, etc.). Finally, if that really isn't what you want, the docs describe how to override the custom schema name-generating behavior.
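For the first option, a minimal sketch of that profiles.yml change, reusing the profile and target names from the example above (they are just placeholders):
# profiles.yml
my_profile:
  target: dev
  outputs:
    dev:
      schema: new_schema  # every model now materializes to new_schema by default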

Related

How to use --selector & --defer in dbt? Please share some examples

I am using dbt on Redshift for data analytics operations. Can anyone please suggest how to use --selector & --defer with dbt run commands?
What is the syntax? What is the use of the selectors.yml file?
Please share some examples.
Thanks
My interpretation of --defer is that it is a way to use the dbt CLI against models that are not built in your current environment, by deferring to a previously saved, state-defined version of them instead.
An example of why you may want to do that is here: #2740 - Automating Non Regression Test
Selectors are a relatively new feature, and I haven't seen much documentation to back this up, but a selector is effectively a name for a set of logical selection criteria (more than one tag, multiple directories, etc.).
I'd recommend this article in general for understanding the build path generation of a typical dbt run: How we made dbt runs 30% faster
From there, you can imagine that within a large project there are huge interconnecting chains for each raw -> analytics-ready transformation pipeline that you have.
We'll use Gitlab's open dbt project as an example.
Gitlab doesn't currently use selectors but they do make use of tags.
So they could build up a selectors.yml file using logical definitions like:
# selectors.yml
selectors:
  - name: sales_funnel
    definition:
      union:
        - method: tag
          value: salesforce
        - method: tag
          value: sales_funnel
  - name: arr
    description: builds all arr models to current state + all upstream dependencies (zoho, zuora subscriptions, etc.)
    default: true
    definition:
      union:
        - method: tag
          value: zuora_revenue
        - method: tag
          value: arr
  - name: month_end_process
    description: builds reporting models about customer segments based on subscription activity for latest closed month
    definition:
      union:
        - method: fqn
          value: rpt_available_to_renew_month_end
          greedy: eager  # default: will include all tests that touch the selected model
        - method: fqn
          value: rpt_possible_to_churn_month_end
          greedy: eager
Full list of valid selector definitions here: https://docs.getdbt.com/reference/node-selection/yaml-selectors#default
What that gives them is the ability to, on a cron job, via Airflow, or via some other orchestrator, simply execute:
dbt run --selector month_end_process --full-refresh
And have confidence that the logical selection of models to run for that process is reproduced exactly, instead of relying on a more fallible approach like assuming that all the models needed live in a single directory:
dbt run --models marts.finance.restricted_safe.reports --full-refresh
Architecturally, you likely won't need selectors until you get to the level of having multiple layers of tags and/or multiple layers of use-case directories to be mindful of within a single run.
Example: tags for the models' function, tags for the sources, tags for the BI/analyst consumers, tags for the materialization schedule, etc.
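As a rough sketch of what a selector for that kind of layering could look like, intersecting a function tag with a schedule tag and excluding work-in-progress models (all of these tag names are hypothetical, not Gitlab's):
# selectors.yml (hypothetical tag names, for illustration only)
selectors:
  - name: finance_daily
    description: finance-function models on the daily schedule, minus anything tagged wip
    definition:
      intersection:
        - method: tag
          value: finance
        - method: tag
          value: daily
        - exclude:
            - method: tag
              value: wip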

Best configuration and parameters for ctags in a CakePHP project

What are the best configuration and parameters for ctags in a CakePHP project?
I want to be able to auto-complete .ctp files, Components, Behaviors, Models, and Helpers.
Check these GitHub repositories; I have found them and they are very good for working with PHP and CakePHP:
https://github.com/amix/vimrc
https://github.com/ndreynolds/vim-cakephp
This solution requires one line in your .ctags file and two lines in your .vimrc file, so it's fairly minimal.
tl;dr
.ctags:
--langmap=php:+.ctp
.vimrc:
" Controller -> Component
map <leader>t yiw<cr>:tag /^<C-R>"<CR>
" View -> Helper
map <leader>h yiw<cr>:tag /^<C-R>"Helper<CR>
Add Views to your tags
This solution is mostly for jumping between files. I'll try and add auto-completion at a later date.
Add this to your ~/.ctags options file to include CakePHP views as PHP files:
--langmap=php:+.ctp
Then I'm assuming you've done ctags -R . at the root of your project (that's what I've done at least). This out of the box should pick up PHP syntax and class definitions.
Auto-completion (general)
I found the auto-completion (omni-completion from Ctrl+X Ctrl+O) doesn't work very nicely with PHP, e.g. if I type $this-> and then try to auto-complete, it doesn't find any tags.
The fix for this was to install phpcomplete.vim. This will find methods within your class.
However that won't auto-complete connected models.
Models
By default ctags should work for all Controller -> Model jumping as the Model name is the same as the class name.
Behaviors
These again should be fine: you don't specify the name of the behavior, you just have the method name, and depending on how distinctive that name is it should get found - or at least it will be in the list of tags.
Components
There's no direct way of mapping these; I couldn't see a way of mapping them through the ctags --regex options. ctags recognises that they are classes but doesn't know the xxx -> xxxComponent mapping.
However, there is one slight trick. You can do a tag search on the beginning of the class name (source):
:tag /^Email
will find
class EmailComponent
You can then map this in your .vimrc
map <leader>t yiw<cr>:tag /^<C-R>"<CR>
This copies the word that you've got the cursor over and then pastes it into the tag command and executes it. My leader is set to ,, so I can type ,t and it takes me to the corresponding component under the cursor.
Helpers
Ok, another slight hack in the .vimrc file:
map <leader>h yiw<cr>:tag /^<C-R>"Helper<CR>
Using ,h, this will jump you from $html->... to
class HtmlHelper extends AppHelper {
But it doesn't work for functions inside the helper, e.g. if your cursor is over script in $html->script, it will not take you to the HtmlHelper script method. So it's a work in progress.

Unset values in bulk uploader in App Engine

Using bulkloader in App Engine, I can get properties set to certain values or to None (or null value). I can also leave them unset if I don't include the property in bulkloader.yaml.
What I would like to do is set the property for some of the entities and leave the property unset for some other entities. Is there a way to do this?
There's no way to do this with the standard YAML configuration of the bulkloader. Note, though, that most model frameworks, including the Python one built in to App Engine, will create any missing properties when you first write a record with them, so there's not much point in going out of your way to leave them unspecified.
You can do this with a post_import_function.
Let's say you have a string property called "notes" that should be omitted if empty:
def post_process_entity(input_dict, instance, bulkload_state):
    if instance['notes'] == '':
        del instance['notes']
    return instance
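To hook that up, the function is referenced from the matching transformer entry in bulkloader.yaml; a rough sketch, where the kind, property names, and module path are placeholders for your own:
# bulkloader.yaml (sketch; kind, property names, and module path are placeholders)
transformers:
- kind: Note
  connector: csv
  property_map:
    - property: notes
      external_name: notes
  post_import_function: my_loaders.post_process_entity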

Clearcase: checkout and modify but forbid checkin

Is it possible in ClearCase to check out a file for modification such that it is impossible to check it back in? I'm going to be hacking some files on a private branch, only some of which I ever want to check in. I want to eliminate the possibility of accidentally checking in unwanted changes. (I know we can write a trigger to check for magic keywords in the checkout comment; I'm looking for something built in to ClearCase.)
"Hacking some files" is spelled in ClearCase lingo: hijacked files in a snapshot view.
All you have to do is to:
lock those files (except for the few developers you know are likely to checkout/checkin the files: cleartool lock -nusers userA,userB,... aFile)
create a snapshot view
change the read/write rights (at the OS level, nothing to do with ClearCase here)
modify them directly (without checking them out first, hence the "hijacked" state)
The OP Kevin Little adds in the comment:
Alas, we only use dynamic views
Easy enough:
"Hacking some files" is also spelled in ClearCase lingo: eclipsed files in a dynamic view.
All you have to do is to:
lock those files (except for the few developers you know are likely to checkout/checkin the files: cleartool lock -nusers userA,userB,... aFile)
create a dynamic view
copy the files you need to modify as aFile.tmp
modify the config spec to not select them
copy them back to their original name (they become "eclipsed", as their private version overrides their official versioned counterpart)
remove the "none" selection rules from the config spec
modify them directly
To not select them, add to the config spec (ct edcs) before the other rules:
element /a/path/to/aFile1 -none
element /a/path/to/aFile2 -none
...
To restore them, all you have to do is move or remove those files.
They will dynamically be replaced by their original, still-versioned element.
I don't know about the administration side. From a user standpoint, you could have two views. In one view, check out the files you don't want to check in. In the other view (your view), check them out unreserved. Then, if you try to check them in, you'll get an error because they're checked out reserved in the other view.

How to specify stream/project in ClearCase snapshot view load rules?

How to specify load rules in this case?
Previously discussed in How do I create a snapshot view of some project or stream in ClearCase?
When you create a UCM snapshot view, you reference the stream at the creation:
cleartool mkview -snap -tag myView_myStream_snap -stream myStream#\myPVob -stg myStorge myRootDir
Note: "myView_myStream_snap" is a convention of mine for naming a UCM snapshot view using the stream "myStream". You can actually name that snapshot view with whatever naame you want.
The load rules are only there to specify what to load within a snapshot view, whatever the selection rules are (the "element ..." rules, which come before the load rules):
load /myVob/dirA
load /myVob/dirB/dirB1
load /myVob/dirB/dirB2
There is no notion of stream or projects here.
The stream represents the "configuration" (i.e. the list of labels referencing some files)
The load rules represent what you want to load, without making any assumptions on the exact version selected
The combination of the two (the select rules based on the stream + the load rules) enables you to see the actual files within your newly created snapshot view.
