How to re-use a token using the Snowflake node connector?

I was asked to report an issue with connecting to Snowflake using the node connector, which I filed here:
Issue: https://github.com/snowflakedb/snowflake-connector-nodejs/issues/113
The problem is that I can't find any documentation on how to re-use an existing token to avoid the long wait when connecting to Snowflake.
Would appreciate any help.
EDIT
Here is the code I use:
// Tokens are retrieved from a DB
if (tokens) {
  connection.masterToken = tokens.masterToken;
  connection.masterTokenExpirationTime = tokens.masterTokenExpirationTime;
  connection.sessionToken = tokens.sessionToken;
  connection.sessionTokenExpirationTime = tokens.sessionTokenExpirationTime;
}
connection.connect(async function (err, conn) {
  if (err) {
    reject(err);
  } else {
    resolve();
  }
});
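For reference, here is a sketch of how the tokens could be captured and stored after a successful connect, assuming the connector keeps exposing these undocumented fields (saveTokens is a hypothetical helper that writes them to our DB):
connection.connect(function (err, conn) {
  if (err) {
    return reject(err);
  }
  // Persist the (undocumented) token fields so the next run can re-use them
  saveTokens({ // hypothetical helper that writes to the DB
    masterToken: conn.masterToken,
    masterTokenExpirationTime: conn.masterTokenExpirationTime,
    sessionToken: conn.sessionToken,
    sessionTokenExpirationTime: conn.sessionTokenExpirationTime
  });
  resolve();
});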

This might not be a full answer, but hopefully it helps you or someone else, as I've had similar issues. For us the process is to get a JWT token via a web service. I haven't tested this, but I suspect the token could be re-used. The JSON response includes a "lease_duration" property. I'm guessing this is in seconds, though I couldn't confirm it; to give you an idea, one value I got for it was 2764800 (32 days, if it is seconds). You could calculate the estimated expiration with something like:
long leaseStartTime = System.currentTimeMillis(); // captured when the token was fetched
long leaseDurationInMs = Long.parseLong(result.get("lease_duration")) * 1000L; // assuming seconds
Date estimatedLeaseExpiration = new Date(leaseStartTime + leaseDurationInMs);
System.out.println("Estimated lease expiration timestamp (human readable): " + estimatedLeaseExpiration);
long estimatedLeaseExpirationInMs = estimatedLeaseExpiration.getTime();
and then check this value each time, before fetching a new token, to see whether you actually need to get another one.

Sorry for answering my own question, but I ended up caching the data on my side to avoid connecting too often.
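Roughly, the idea is a simple TTL cache in front of the queries. A minimal sketch (the names and the TTL value are illustrative, and runQuery stands in for whatever opens the Snowflake connection and executes the statement):
// Keep query results in memory for a while so we connect less often
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // illustrative TTL

async function cachedQuery(sql, runQuery) {
  const hit = cache.get(sql);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.rows; // still fresh, skip the Snowflake round trip
  }
  const rows = await runQuery(sql); // connects to Snowflake as needed
  cache.set(sql, { rows, at: Date.now() });
  return rows;
}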

Related

Identityserver4 and Redis cache not thread safe?

We use IdentityServer4 to protect our ASP.NET Core APIs on Azure. This afternoon we were confronted with a very strange occurrence.
One of our APIs simply returns all items from a database table based on the sub claim of our user (i.e. the userid). Today two users reported seeing items that were not their own. This code has been running flawlessly for some years now.
Our startup.cs contains the following:
services.AddStackExchangeRedisCache(action =>
{
    action.Configuration = Configuration["RedisConnectionString"];
});
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddIdentityServerAuthentication(options =>
    {
        options.Authority = Configuration["identityServerUrl"];
        options.ApiName = "<<redacted>>";
        options.ApiSecret = "<<redacted>>";
        options.EnableCaching = true;
        options.CacheDuration = TimeSpan.FromMinutes(5);
    });
Because of timeout issues on Redis, we added the following line:
ThreadPool.SetMinThreads(300, 300);
Could this have anything to do with userids being swapped?
As it is a protected API, all requests carry a bearer token, which is validated (and cached in Redis for 5 minutes) by the logic above. How could the userid suddenly be different?
Thanks for thinking with me!
It turns out the report of seeing the wrong data was misinterpreted; the logic mentioned above still works as expected, and the userid is correct.
We still have another issue to work out, but it is not related to this question.

Firebase 3.0 query

I am building a project where I want to extract a list from a query using Firebase 3.0. I am quite new to this, but I imagine there is a simple answer to my question.
I have this structure:
requests : {
  luke1 : {
    1 : {
      .../...
      users : {
        0 : {
          username : joseph,
          answered : 0
        }
        1 : {
          username : mark,
          answered : 1
        }
      }
    }
  }
}
Basically the logged-in user (luke1) sends a request to a number of users (joseph and mark), and let's say I'm logged in as user joseph.
I want to get a list of the requests sent to joseph which have not been answered yet:
var ref = firebase.database().ref("requests/");
I want to know how I can write the query.
Thanks for taking time to read this and if you need more information from my end, please let me know.
When using Firebase (and in most NoSQL databases), you will often find that you end up modeling the data for the way your app wants to consume it.
So with your current data model, you can easily get the requests sent by a specific user:
ref.child("requests/luke1").on("child_added", ...
But you cannot yet easily find the requests sent to a specific user. To allow querying for that data easily, you could add an inverted data structure to your database:
received: {
  joseph: {
    0: {
      from: luke1,
      answered: 0
    }
  }
}
Now you can easily get joseph's unanswered requests with:
ref.child("received/joseph").orderByChild("answered").equalTo(0).on("child_added", ...
Your initial response is likely that this sort of data duplication is bad. But it's actually quite common in NoSQL databases.
There are many more ways to model this structure. For a great introduction to the topic, I recommend this article on NoSQL data modeling.
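To keep the two structures in sync when a request is created, one option is a single multi-location update that writes to both places atomically (a sketch; the paths and keys follow the example data above):
// Fan out a new request to both structures in one atomic write
var updates = {};
updates["requests/luke1/1/users/0"] = { username: "joseph", answered: 0 };
updates["received/joseph/0"] = { from: "luke1", answered: 0 };
firebase.database().ref().update(updates);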
To achieve this kind of query, you need to store the current user's ID in a variable. After doing this, just try something like the following query:
var ref = firebase.database().ref("requests").child(currentUserId).child("users");
If I'm not wrong, it will return the results you want.

gcloud check if a topic exists and ability to reuse the topic

I'm using gcloud-node.
The createTopic API returns error 409 if the topic already exists. Is there a flag that can implicitly create a topic when publishing a message, or is there an API to check whether a topic already exists?
It's easy to use the getTopics API, iterate through the returned topic array, and determine whether a topic exists. I just wanted to make sure I don't write something that already exists.
Is there a flag that can implicitly create a topic when publishing a message, or is there an API to check whether a topic already exists?
I believe the problem you'll run into is that if a message is published to a topic that doesn't exist, it is immediately dropped. So, it won't hang around and wait for a subscription to be created; it'll just disappear.
However, gcloud-node does have methods that will create a topic if necessary:
var topic = pubsub.topic('topic-that-maybe-exists');
topic.get({ autoCreate: true }, function(err, topic) {
  // topic.publish(...
});
In fact, almost all gcloud-node objects have a get method that works the same way as above, e.g. a Pub/Sub subscription, a Storage bucket, a BigQuery dataset, etc.
Here's a link to the topic.get() method in the docs: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.37.0/pubsub/topic?method=get
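If that note holds, the same get-or-create pattern should carry over to the other object types, for example a subscription (a sketch; the name is illustrative):
// Same get-or-create pattern on a subscription, per the note above
var subscription = topic.subscription('subscription-that-maybe-exists');
subscription.get({ autoCreate: true }, function(err, subscription) {
  // subscription is ready to use here
});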
I ran into this recently, and the accepted answer runs you into HTTP 429 errors. topic.get is an administrative function which has a significantly lower rate limit than normal operations. You should only call it when necessary, e.g. on error code 404 during publish (the topic doesn't exist), something like so:
topic.publish(payload, (err) => {
  if (err && err.code === 404) {
    // Topic doesn't exist yet: create it, then retry the publish
    topic.get({ autoCreate: true }, (err, topic) => {
      if (err) return console.error(err); // handle the create failure too
      topic.publish(payload);
    });
  }
});
Personally I use this one:
const topic = pubsub.topic('topic-that-maybe-exists');
const [exists] = await topic.exists();
if (!exists) {
  await topic.create();
}
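Another option, since createTopic returns a 409 when the topic already exists (as the question notes), is to attempt the create once up front and swallow only that error. A sketch:
// Create the topic at startup and treat "already exists" (409) as success
try {
  await pubsub.createTopic('topic-that-maybe-exists');
} catch (err) {
  if (err.code !== 409) throw err; // anything else is a real failure
}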

SymmetricDS sync based on last updated time

I have 2+ clients and 1 server, and I'm able to keep all data synced between the clients and the server as long as they're all connected. The problem is when a client (laptop) is offline and comes back online after a while. In this situation I need to make sure only the latest data is synced across the databases, but what happens now is that the last connected client's data gets synced to the other clients/server even when it isn't the latest change and there are newer changes on the server/other clients.
I appreciate if you can help me solve this.
Finally I found the answer.
I added a load filter record and used the following bsh script in the filter_on_update column to skip changes with an older modified-date value:
import java.text.SimpleDateFormat;

// MODIFIED / OLD_MODIFIED hold the incoming and existing values of the modified column
SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S", Locale.ENGLISH);
// Only load the change if it is newer than what the target database already has
return format.parse(MODIFIED).after(format.parse(OLD_MODIFIED));

Ldap query only returning 1000 users... yes I am using paging

I have a simple GetStaff function that should retrieve all users from Active Directory. We have over 1,000 users, so the directory searcher uses paging because the default AD MaxPageSize is 1000.
Currently the search sometimes works when I build, sending back all 1054 users, and other times it only sends back 1000. If it works once, it works every time; if it fails once, it fails every time. I have wrapped everything in using statements to make sure the objects are destroyed, but it still doesn't always seem to respect the PageSize attribute. By default, if the PageSize attribute is set, the searcher should use a SizeLimit of 0. I have tried leaving the size limit out, setting it to 0, and setting it to 100000, and the unstable result is the same. I have also tried lowering the PageSize to 250 with the same unstable results. I am currently trying to change the LDAP policy on the server to a MaxPageSize of 10000, and I am still receiving 1000 users even with the search PageSize set to 10000. Not sure what I am missing here, but any help or direction would be appreciated.
public IEnumerable<StaffInfo> GetStaff(string userId)
{
    try
    {
        var userList = new List<StaffInfo>();
        using (var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword))
        {
            using (var de = new DirectorySearcher(directoryEntry)
            {
                Filter = GetDirectorySearcherFilter(LdapFilterOptions.AllUsers),
                PageSize = 1000,
                SizeLimit = 0
            })
            {
                foreach (SearchResult sr in de.FindAll())
                {
                    try
                    {
                        var userObj = sr.GetDirectoryEntry();
                        var staffInfo = new StaffInfo(userObj);
                        userList.Add(staffInfo);
                    }
                    catch (Exception ex)
                    {
                        Log.Error("AD Search result loop Error", ex);
                    }
                }
            }
        }
        return userList;
    }
    catch (Exception ex)
    {
        Log.Error("AD get staff try Error", ex);
        return Enumerable.Empty<StaffInfo>();
    }
}
A friend got back to me with the below response that helped me out, so I thought I would share it and hope it helps anyone else with the same issue.
The first thing I think of is "Are you using the domain name, e.g. foo.com as the _adpath?"
If so, then I have a pretty good idea. A DNS query for foo.com will return a random list of up to 25 DCs in the domain. If the first DC in that random list is unresponsive or firewalled off and you get that DC from DNS, you will experience the behavior you describe. Since DNS is cached on the local machine, you will see it happen consistently one day and then not the next. That's infuriating behavior. :/
You can verify this with a network trace to see if this is happening.
So how do you workaround it? A couple of options.
Query DNS -> create a list of the hosts returned -> try the first one. If it fails, try the next one. If you hit the bottom of the list, fail. If you do this, log each individual failure noisily so the admins don't blame you.
Even better would be to ask the AD administrators for a list of ldap servers and use that with the approach described above.
80% of administrators will tell you just to use the domain name. This is good because deploying a new domain will "just work" with no reconfiguration required.
15% of administrators will want to specify a couple of DCs that are network closest to the application. This is good for performance, but bad if they forget about this application when the time comes for them to upgrade their domain.
The other 5% doesn't really matter. :)
The next point that I see is that you are using LDAP, not LDAPS. That is fine, but there is a risk that you will use "Basic" binds. With "Basic" binds, Joe Hacker can steal your account credentials using a network sniffer. There are a couple of possible workarounds.
1. There is another DirectoryEntry constructor that will let you specify "Secure" as the auth method.
2. Ask your admins if you can use LDAPS (more portable, in case you need to talk to an LDAP server other than Active Directory).
The last piece is regarding PageSize. 1,000 should be fine universally. Don't use any value > 5,000 or you can expect some fidgety behavior: that value is higher than the default limit under Windows 2003, and in Windows 2008 the page size is hard-limited to 5,000 unless it has been overridden using a rather obscure bit in AD called dsHeuristics. http://support.microsoft.com/kb/2009267
LDAP is configured, by default, to return a maximum of 1000 results. You can change this setting on the domain you're requesting from.