Is it possible to set options in a SnowSQL config file?

I know I can do:
snowsql --connection my_connection --option friendly=false
But I'd like to do:
[connections.my_connection]
accountname = aa12345.us-central1.gcp
username = my_username
password = my_password
warehouse = my_warehouse
role = my_role
option = friendly=false
The above yields:
Error parsing /home/me/.snowsql/config Recovering partially parsed config values: cause: Parsing failed with several errors.

Yes, in your home directory find the .snowsql sub-directory; inside is a config file where you can set exactly that!
https://docs.snowflake.com/en/user-guide/snowsql-config.html#snowsql-config-file
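For reference, a sketch of what the linked docs describe: SnowSQL reads option settings from a separate [options] section of the same ~/.snowsql/config file, rather than from an option = key inside the connection block (connection values below are the question's own placeholders):

```ini
[connections.my_connection]
accountname = aa12345.us-central1.gcp
username = my_username
password = my_password
warehouse = my_warehouse
role = my_role

[options]
friendly = False
```

With this in place, plain snowsql --connection my_connection should start with friendly output disabled, no --option flag needed.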

Related

backend error when put file with python connector

With the Python connector, SQL queries work fine. However, when I execute a PUT instruction (PUT file:///localfile), I get an error:
TypeError: __init__() missing 1 required positional argument: 'backend'
With SnowSQL on the same server, it works.
The code used:
import snowflake.connector

ctx = snowflake.connector.connect(
    user='lincavo',
    account='*****',
    password='*******',
    database='DEV_POC_VELOS_DB',
    schema='DATALAB',
    role='dev_data_analyst'
)
cur = ctx.cursor()
FILE_NAME = "/home/lincoln/DEV/snowflake/100003097-SC.json"
sql = "PUT file:///home/lincoln/DEV/snowflake/100003097-SC.json @local_velos_json auto_compress=false"
cur.execute(sql)
snowflake-connector-python 2.6.2
Can you help me, please? Thanks.

Jupyterhub AD Integration Error: No entry found for user when looking up attribute ‘sAMAccountName’

I am facing an issue configuring LDAP authentication with AD integration, and I get an error while logging in. Below is my config file.
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = '<ip>'
c.LDAPAuthenticator.server_port = 389
c.LDAPAuthenticator.use_ssl = False
c.LDAPAuthenticator.allowed_groups = []
# Set up the template used to bind the user from the LDAP directory
#c.LDAPAuthenticator.bind_dn_template = ["uid={username},ou=Admin,ou=Groups,dc=example,dc=com"]
# Active Directory integration
c.LDAPAuthenticator.lookup_dn = True
c.LDAPAuthenticator.lookup_dn_search_filter = '({login_attr}={login})'
c.LDAPAuthenticator.lookup_dn_search_user = 'user@example.com'
c.LDAPAuthenticator.lookup_dn_search_password = 'redact'
c.LDAPAuthenticator.user_search_base = 'ou=Admin,ou=Groups,dc=example,dc=com'
c.LDAPAuthenticator.user_attribute = 'sAMAccountName'
c.LDAPAuthenticator.lookup_dn_user_dn_attribute = 'cn'
c.LDAPAuthenticator.escape_userdn = False
c.LDAPAuthenticator.bind_dn_template = '{username}'
While logging in to JupyterHub with my user I get the error below.
[W 2021-03-18 05:02:18.675 JupyterHub ldapauthenticator:275] No entry found for user 'testuser' when looking up attribute 'sAMAccountName'
[W 2021-03-18 05:02:18.675 JupyterHub base:713] Failed login for testuser
versions
python==3.8.5
jupyterhub==1.1.0
jupyterhub-ldapauthenticator==1.3.2
I have tried different combinations of these parameters, but the error remains the same.
c.LDAPAuthenticator.lookup_dn_search_user is set to the LDAP service account I mentioned. Is that the correct way?
Any suggestions please.
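For comparison, a sketch of an AD-oriented ldapauthenticator setup (not a verified fix for this environment; the server address, search base, and service account below are placeholders). One thing the error suggests: with lookup_dn enabled, user_search_base must point at a container that actually holds the user objects — an ou=Groups subtree typically does not, whereas something like cn=Users often does:

```python
# jupyterhub_config.py -- hypothetical AD settings, placeholders throughout
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = 'ad.example.com'
c.LDAPAuthenticator.server_port = 389
c.LDAPAuthenticator.use_ssl = False
c.LDAPAuthenticator.lookup_dn = True
c.LDAPAuthenticator.user_attribute = 'sAMAccountName'
c.LDAPAuthenticator.lookup_dn_search_filter = '({login_attr}={login})'
c.LDAPAuthenticator.lookup_dn_search_user = 'svc_jupyter@example.com'  # bind as UPN
c.LDAPAuthenticator.lookup_dn_search_password = 'redact'
c.LDAPAuthenticator.user_search_base = 'cn=Users,dc=example,dc=com'    # container with user objects
c.LDAPAuthenticator.lookup_dn_user_dn_attribute = 'cn'
```

Checking the same search with ldapsearch, bound as the service account against that base, is a quick way to confirm whether the sAMAccountName lookup can succeed at all.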

Knife search environments and roles with a field using a wildcard

I'm changing an attribute from:
default['splunk']['auth'] = "admin:changeme"
to:
default['splunk']['auth']['username'] = "admin"
default['splunk']['auth']['password'] = "changeme"
and I want to be sure that I don't miss / forget something...
It's easy enough to be sure I didn't miss anything in cookbooks:
nickh@BONHENRY:~/Repositories/chef$ ack-grep "\[.splunk.?\]\[.auth.?\]"
cookbooks/splunk_cookbook/attributes/default.rb
36:default['splunk']['auth']['username'] = "admin"
37:default['splunk']['auth']['password'] = "changeme"
cookbooks/splunk_cookbook/attributes/README.md
72:* `node['splunk']['auth']['username']` - The default admin username to use instead of splunks "admin"
73:* `node['splunk']['auth']['password']` - The default admin password to use instead of splunks "changeme"
cookbooks/splunk_cookbook/recipes/server.rb
219: command "#{splunk_cmd} edit user admin -password #{node['splunk']['auth']['password']} -roles admin -auth admin:changeme && echo true > /opt/splunk_setup_passwd"
228: command "#{splunk_cmd} enable listen #{node['splunk']['receiver_port']} -auth #{node['splunk']['auth']['username']}:#{node['splunk']['auth']['password']}"
326: command "#{splunk_cmd} edit licenser-localslave -master_uri 'https://#{node['splunk']['dedicated_search_master']}:8089' -auth #{node['splunk']['auth']['username']}:#{node['splunk']['auth']['passwor
391: command "/opt/splunk/bin/splunk login -auth #{node['splunk']['auth']['username']}:#{node['splunk']['auth']['password']}"
cookbooks/splunk_cookbook/recipes/forwarder.rb
78:execute "#{splunk_cmd} edit user admin -password #{node['splunk']['auth']['password']} -roles admin -auth admin:changeme && echo true > /opt/splunk_setup_passwd" do
However, I'm not sure of a good way to grep/search JSON roles/environments for the same...
This works:
nickh@BONHENRY:~/Repositories/chef$ knife search environment "override_attributes_splunk_auth:*" -i
1 items found
prod-acme
However, do you have any idea why this works:
nickh@BONHENRY:~/Repositories/chef$ knife search environment "*:*" -i
108 items found
but this doesn't? :
nickh@BONHENRY:~/Repositories/chef$ knife search environment "*splunk_auth:*" -i
ERROR: knife search failed: invalid index name or query
Is there an easy / better way to do it that I'm not thinking of / unaware of? Can I change something w/ my Solr query that would make it work?
Thanks in advance :-)
Because *:* is a special case in the search system that bypasses Solr entirely. Chef's search index uses a transformed version of the query, so I don't think it would be safe to use a glob in the facet name like that. I would recommend running knife download environments/ (and similarly for roles) and then grepping the local JSON files.
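The recommended workflow above — pull the JSON down, then search it locally — can be sketched as follows. The file names and contents here are hypothetical stand-ins for what knife download environments/ would produce; the helper simply reports which environments set any splunk/auth attribute:

```python
import json
import pathlib
import tempfile

# Hypothetical local copies of environment JSON, as fetched by
# `knife download environments/` (names and contents are made up).
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "prod-acme.json").write_text(json.dumps({
    "name": "prod-acme",
    "override_attributes": {"splunk": {"auth": "admin:changeme"}},
}))
(tmp / "staging.json").write_text(json.dumps({
    "name": "staging",
    "override_attributes": {"mysql": {"port": 3306}},
}))

def environments_with_splunk_auth(path):
    """Names of environments setting splunk/auth in default or override attributes."""
    hits = []
    for f in sorted(path.glob("*.json")):
        data = json.loads(f.read_text())
        for section in ("default_attributes", "override_attributes"):
            splunk = data.get(section, {}).get("splunk", {})
            if isinstance(splunk, dict) and "auth" in splunk:
                hits.append(data["name"])
                break
    return hits

print(environments_with_splunk_auth(tmp))  # ['prod-acme']
```

Unlike a Solr query, this works on the raw JSON, so it finds the attribute wherever it appears regardless of how the search index flattens the key names.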

RM + DSC to node in untrusted domain

So I mention the untrusted domain aspect because I went through all the hoops around credential delegation and trusted hosts lists etc to allow me to successfully push a DSC configuration from my RM server to a target node (not using RM, just native DSC). I get that bit and it works, great.
Now when I use those same scripts in RM (with some minor edits for the format expected by RM), RM reports a successful deploy, but all that has happened is that the component's bits have been copied to the target node at the default location for $applicationPathRoot (C:\Windows\DtlDownloads); there is no real evidence of an attempt to apply a mof file.
My RM server and target nodes are in different domains with no trust. Both servers are W2k8R2 (+ WMF4 of course). I'm running with Update 4 of RM server and client.
Here are the DSC scripts I'm running in RM:
CopyDSCResources.ps1
Configuration CopyDSCResource
{
    param (
        [Parameter(Mandatory=$false)]
        [ValidateNotNullOrEmpty()]
        [String] $ModulePath = "$env:ProgramFiles\WindowsPowershell\Modules")

    #[PSCredential] $credential = get-credential

    Node VCTSCFDSMWEB01
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}
CopyDSCResource -ConfigurationData $configData -Verbose
# test outside of RM
#CopyDSCResource -ConfigurationData CopyDSCResource.ConfigData.psd1
#Start-DscConfiguration -Path .\CopyDSCResource -Credential $credential -Verbose -Wait
CopyDSCResource.ConfigData.psd1
#@{
$configData = @{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
        },
        @{
            NodeName = "VCTSCFDSWEB01.rlg.test"
            Role = "WebServer"
        }
    )
}
I'm afraid I can't seem to upload screenshots from my current location, but in terms of RM I have a vNext environment with a single server linked, a vNext release path with a single 'Dev' stage, and a vNext release template with a single 'Deploy PS/DSC' action. The configuration of the action is:
ServerName - VCTSCFDSMWEB01
ComponentName - CopyDSCResource vNext
PSScriptPath - copydscresources.ps1
PSConfigurationPath - copydscresource.configdata.psd1
UseCredSSP - true
When I run a new release, the deploy stage reports success and when I view the Deployment log files I get the following:
Upload components - Successfully uploaded to the normalized store.
Deploy Using PS/DSC - Copying recursively from \\vcxxxxtfs03\Drops\CorrespondenceCI\CorrespondenceCI20150114.1\Scripts to C:\Windows\DtlDownloads\CopyDSCResource vNext succeeded.
Finally the DSC event log has the following:
Job {CD3BE350-4072-4C8B-835F-4B4D1C46D65D} :
Configuration is sent from computer NULL by user sid S-1-5-18.
This compares markedly to the same event log entry when run outside of RM:
Job {34F78498-CF18-4F2A-9874-EB54FDA2D990} :
Configuration is sent from computer VCXXXXTFS01 by user sid S-1-5-21-1034805355-1149422947-1317505720-10867.
Any pointers appreciated.
It would be good if I could see evidence of a mof file being created on the RM server, for example. Does anybody know where I can find this?
Turns out the crucial element was that my DSC script had to use an environment variable for naming the node. So:
Node $env:COMPUTERNAME
No idea why but it works!

How to Save My User Account and Password in Google App Engine Launcher?

I'm using the Google App Engine Launcher to deploy my app to the GAE servers. Is there a way to save my user account and password so I don't have to type it in every time I redeploy?
I'm still in the learning stages of using GAE so typing my 16 odd character password gets tiresome when I redeploy 15+ times per evening.
You can make a .bat file that has the following text:
echo <password> | c:\python25\python.exe "C:\Program Files\Google\google_appengine\appcfg.py" --email=username --passin update <directory of app on your pc>
(According to GAE docs you cannot specify the password as a command line option)
Use the OAuth flow (appcfg.py --oauth2) to save an OAuth2 token so you don't need to keep re-typing your password.
The accepted solution didn't work for me. Using pipes did:
echo <password> | c:\python25\python.exe "C:\Program Files\Google\google_appengine\appcfg.py" --email=username --passin update <directory of app on your pc>
appcfg already does this for you. Per the docs:
appcfg.py gets the application ID from the app.yaml file, and prompts you for the email address and password of your Google account. After successfully signing in with your account, appcfg.py stores a "cookie" so that it does not need to prompt for a password on subsequent attempts.
If this isn't occurring for you, you might want to try deleting any .appcfg* config files.
Another tip, using the command line: to get appcfg.py to accept --password on the command line instead of being prompted for it, edit appengine/google_appengine/google/appengine/tools/appcfg.py and add the following in the parser.add_option section:
parser.add_option("-p", "--password", action="store", dest="password",
                  metavar="PASSWORD", default=None,
                  help="The password")
Then modify the GetUserCredentials function:
def GetUserCredentials():
    """Prompts the user for a username and password."""
    email = self.options.email
    if email is None:
        email = self.raw_input_fn("Email: ")
    password = self.options.password
    if password is None:
        password = self.raw_input_fn("Password: ")
    # password_prompt = "Password for %s: " % email
    # if self.options.passin:
    #     password = self.raw_input_fn(password_prompt)
    # else:
    #     password = self.password_input_fn(password_prompt)
    return (email, password)
That's it, now you can call:
appcfg.py update demos/guestbook --email=email@gmail.com --password=xxxx
Ref: http://samalolo.blogspot.com/2009/04/appcfgpy-tweak-to-allow-passing.html
I just wanted to say thank you to Friar Broccoli; it's exactly what I was looking for. To clarify for other beginners like myself, my final batch file ended up looking like the following:
c:\python27\pythonw.exe "C:\Program Files (x86)\Google\google_appengine\appcfg.py" --oauth2 update "C:\Users\[username]\[directory]\app.yaml"
Worked perfectly, wish this solution was higher up.
For Windows 7, the saved credentials live in the .appcfg_cookies file, at C:\Users\<username>\.appcfg_cookies.
You could write a command line script that executes appcfg.py to do this.
You can specify the email to use with the --email= command line parameter.
You can pass in the password from stdin by using the --passin parameter.
It's amazingly simple. Just put this in a batch file:
appcfg.py --oauth2 update "X:\local\path\to\your\app.yaml\file"
The first time you run it google will authenticate, after that it's all automatic.
