Snowflake Client Support Policy - snowflake-cloud-data-platform

According to its website, Snowflake makes no commitments for drivers older than the minimum supported versions, and even suggests that such clients may be blocked from connecting to Snowflake.
Can anyone verify whether blocking connections has actually been the practice? What would it take to request an extension for certain drivers for a particular customer?
Thanks.

The Snowflake docs specify:
Client versions that are below the minimum supported version may be blocked from connecting to Snowflake. Note that Snowflake will provide advanced notification before blocking access for a particular client version.
Note the key word: clients may be blocked, not that they will be. In general, using a client version below the minimum supported version will still work, but if you have an issue with the client, the first thing Snowflake support will do is ask you to upgrade.

Related

Is one source to many target cluster replication supported by YugabyteDB’s 2-DC async replication mechanism?

For the 2-DC feature described here (https://docs.yugabyte.com/latest/deploy/multi-dc/2dc-deployment/), is one source to multiple target cluster configuration supported?
Yes, this is a supported scenario. Please join our community Slack for real-time support if you run into any issues. Also, if you are trying this feature out, we would love to get your feedback on the experience.

How to handle multiple versions of an application?

I've built an iOS and Android app with a Node.js backend server that is now in production, but as I'm new to this I have no idea how to handle updates of the server and the apps.
First of all, how am I supposed to update the Node.js server without downtime?
Second, let's suppose I have a chat in my app and for some reason I have to change it, but the change is not compatible with previous versions. How am I supposed to act?
I think the question is not entirely clear, but I have no idea what to search for on Google to point me in the right direction. Anything would be helpful.
Updating server without downtime
The answer really depends upon how your infrastructure is configured.
One way would be to have a second server, configured with the new software, ready to go, and then switch over to the new server. If you are going to be doing this a lot, then having a mechanism/tooling to do this will certainly simplify things. If things go wildly wrong, you just switch back.
We use AWS. As part of launching an update, we provision a number of instances to match the current number (so we don't suddenly need several hundred more instances launching). When all the instances are ready to go, our load balancer switches from the current configuration to the new configuration. No one sees anything other than a slight delay as the caches start getting populated.
Handling incompatible data
This is where versioning comes in.
Versioning the API - The API to our application has several versions. Each of them is just a proxy to the latest form. So, when we upgrade the API to a new version, we update the mappers for the supported versions so that the input/output for the client doesn't change, but internally, the main library of code is operating only on the latest code. The mappers massage data between the user and the main libraries.
Versioning the data being messaged - As this is an app, the data coming in should be versioned, so an app sending v1 data (or unversioned data, if you've not got a version in there already) has to be upgraded on the server to the v2 format. From then on, it is v2. On the way out, the v2 result needs to be mapped down to v1. It is important to understand that the mapping may not always be possible. If you've consolidated/split attributes from v1 to v2, you're going to have to work out how the data should look from the v1 and v2 perspectives.
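As a minimal sketch of this idea, the upgrade path can be a chain of per-version mappers, so an old payload is stepped up to the latest format on the way in and mapped back down on the way out. The field names here (a v1 "name" split into v2 "firstName"/"lastName") are purely illustrative assumptions, not anything from the original post:

```javascript
// Hypothetical per-version upgraders: each maps version N to version N+1.
// An unversioned payload is treated as version 0.
const upgraders = {
  0: (msg) => ({ version: 1, name: msg.name }),
  1: (msg) => {
    // Assumed breaking change: v2 split "name" into first/last names.
    const [first = "", last = ""] = msg.name.split(" ");
    return { version: 2, firstName: first, lastName: last };
  },
};

function upgradeToLatest(msg) {
  let current = msg.version ? msg : { ...msg, version: 0 };
  while (upgraders[current.version]) {
    current = upgraders[current.version](current);
  }
  return current;
}

// Mapping back down for a v1 client; note this is where lossy or
// impossible mappings show up (here we reassemble the split fields).
function downgradeToV1(msg) {
  return { version: 1, name: `${msg.firstName} ${msg.lastName}`.trim() };
}
```

The point of the chain is that adding v3 later only means writing one new upgrader (2 → 3), not rewriting every mapper.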
Versioning the data being stored - Different techniques exist depending upon how the data is being stored. If you are using an RDBMS, then migrations and repeatables are very commonly used to upgrade data ready for a new app to operate on it. Things get interesting when you need the software to temporarily support both patterns. If not using an RDBMS, a technique I've seen is to upgrade the data on read. Say you have some sort of document store: when you read a document, check its version. If it is old, upgrade and save it; now you can treat it as the latest version. The big advantage here is that there is no long-running data migration taking place; over time, the data is upgraded. A downside is that every read needs to do a version check. So maybe mix and match: have the check/upgrade/save happen on every read, and also create a data migration tool whose sole job is to trawl through the data. When all the data is migrated, drop the checks (as all the data is either new, and therefore matches the latest version, or has been migrated to the latest version) and drop the migrator.
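The upgrade-on-read pattern described above can be sketched like this, using an in-memory Map as a stand-in for a real document store; the `tags` field added by the assumed v1 → v2 migration is illustrative only:

```javascript
// "Upgrade on read" sketch: documents carry a version, and stale ones are
// migrated (and re-saved) the first time they are read.
const LATEST_VERSION = 2;
const store = new Map(); // docId -> document; stand-in for a document store

// One migration step per version bump (assumed: v2 adds a "tags" field).
const migrations = {
  1: (doc) => ({ ...doc, version: 2, tags: doc.tags || [] }),
};

function readDocument(id) {
  let doc = store.get(id);
  // Version check on every read: upgrade and persist if the doc is old,
  // so the next read of this document skips the migration entirely.
  while (doc.version < LATEST_VERSION) {
    doc = migrations[doc.version](doc);
    store.set(id, doc);
  }
  return doc;
}
```

A separate batch migrator would just call `readDocument` for every id; once it has finished, the version check (and the migrator) can be deleted.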
I work in the PHP world and I use Phinx to handle DML (data) migrations and our own repeatables code to handle DDL (schema changes).
Updating your backend server is a pain indeed. You can't really do that with no downtime at all. What you can do, though, assuming your clients access your server via a domain rather than a plain IP address, is prepare another server with data that is as up to date as possible and do a DNS record update to redirect traffic to it. Keep in mind that DNS has a long propagation time, during which some clients reach the old server and some the new one (which means a big headache if data consistency is important to you).
Changing the API is another pain. Often you need to support older versions of your application in parallel with the newer ones. Most app stores will show you statistics on your app versions, so you can tell when it's safe to drop support for an old version.
A common practice is to version the API endpoints, so that version 1 of your app accesses URL/API/v1/... and version 2 accesses URL/API/v2/..., which enables the server to send different replies based on the client version. You increase the version every time you make a breaking change to the protocol. This makes for a "future-compatible" protocol.
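A minimal sketch of that kind of version-prefixed routing might look like the following; the handler bodies are placeholder assumptions, and in a real Node.js backend this logic would sit inside your HTTP framework's router:

```javascript
// Route a request by the version segment in its URL (/api/v1/..., /api/v2/...),
// so clients speaking different protocol versions can coexist.
const handlers = {
  1: (path) => ({ reply: "v1 format", path }), // placeholder handlers
  2: (path) => ({ reply: "v2 format", path }),
};

function route(url) {
  const match = url.match(/^\/api\/v(\d+)(\/.*)$/);
  if (!match) return { error: "unversioned request" };
  const version = Number(match[1]);
  const handler = handlers[version];
  return handler ? handler(match[2]) : { error: `unsupported version ${version}` };
}
```

The "unsupported version" branch is also where a server can tell an obsolete client that it must update.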
In some cases you can initially add a mechanism that lets the server send a message to an old version of the client saying that its version is obsolete and it needs to update...
Most big apps already have such a mechanism, while most small apps just take the risk of some downtime and drop support for a few non-updated clients...

NServiceBus & ServiceInsight Sql Server Transport & Persistence

The application we have been building is starting to solidify, in that the majority of the functionality is now in place. This has given us some breathing room, and we are starting to evaluate our persistence model and the management of it. I guess you could say the big elephant in the room is RavenDB. While we have not experienced any functional issues with it yet, we are not comfortable with managing it. Simple tasks such as executing a query, truncating a collection, etc., are challenging for us, as we are new to the platform and to document-based NoSql solutions in general. Of course we are capable of learning it, but I think it comes down to confidence, time, and leveraging our existing Sql Server skill sets.
For example, we pumped millions of events through the system over the course of a few weeks, and the successfully processed messages were routed to our Audit queue in MSMQ. We also had ServiceInsight installed, and it processed the messages in the Audit queue, which chewed up all the disk space on the server. We did not know how to fix this and literally had to delete the data file that we found for RavenDB. Let me just say, doing that caused all kinds of headaches.
So with that in mind, I have been charged with evaluating the feasibility and benefits of potentially leveraging Sql Server for the Transport and/or Persistence of our Service Endpoints. I could also use some guidance on configuring ServiceControl and ServiceInsight to leverage Sql Server. Any information you might be able to provide about configuring these, and about any drawbacks or architectural issues we should consider, would be greatly appreciated.
Thank you, Jeffrey
Using SQL persistence requires very little configuration (it's an implementation detail). Using the SQL transport, however, is more of an architectural decision than an infrastructure one, as you are changing to a broker-style architecture, which has implications you need to consider before going down that route.
ServiceControl and ServiceInsight persistence:
Although ServiceControl monitors MSMQ as the default transport, you can configure ServiceControl to support other transports such as RabbitMQ and SqlServer as well. Here you can find the details of how to do that.
At the moment ServiceControl relies on RavenDB for its persistence, and it is not possible to change that to SQL, as ServiceControl relies on Raven features (AFAIK).
There is an open issue for expiring ServiceControl's data; see this issue on GitHub.
HTH
Regarding ServiceControl usage of RavenDB (this is the underlying service that serves the data to ServiceInsight UI):
As Sean Farmar mentioned (above), in the post-beta releases we will be including message expiration, and on-demand audited message deletion commands so that you can have full control of the capacity utilization of SC.
You can also change the drive/path of the ServiceControl database location to allow it to use a larger drive.
Note that ServiceControl (and the ServiceInsight / ServicePulse tools that use it) is intended for analysis, debugging and operational monitoring. It's intended to store a limited amount of audited data (based on your throughput and capacity needs, this may vary significantly when counted as a number of messages, but the database storage capacity can be up to 16TB).
If you need long-term storage for audited data, you can hook into the ServiceControl HTTP API and transfer the messages' data into various long-term / unlimited-size / low-cost storage solutions (e.g. http://aws.amazon.com/glacier).
Please let us know if this answers your needs and whether you have additional questions
Danny.

On database communication security

So, I've been reading about security in relation to desktop applications and database servers. Previously when I've built applications that are linked to a database, I've taken the easy route and simply stored the connection string hard coded in the source code directly. This has worked since the binaries were not distributed to third parties. However, now I'm working on a project whose binaries are bound for third party use, and in this case the communication with the server becomes a security issue that I need to deal with.
Since it is a priority that there be no direct connection to the remote database from the client machine, I understand that a server/client database service is a good choice. In this case, the client machine sends requests using TCP to a server, which then processes the request using stored procedures and responds accordingly to the client.
My questions in relation to this are:
i. Would this setup be an advisable one, or are other setups of which I am unaware more advisable for the kind of project I am working on?
ii. How does one go about securing such a connection? I can easily set up an SSL connection to the server using a security certificate generated with OpenSSL, but I'm not sure whether this is the correct way of securing the connection for a desktop application, or whether this method is primarily used for HTTPS. And WHEN, in general, should one secure the connection (are there instances where this wouldn't matter, for instance if all I do is send booleans back and forth)? Any good resources that discuss these issues? For instance, I have a lot of applications installed on my Windows PC that are networked, but I don't see many of them installing a security certificate on my PC. What gives?
Full disclosure: I'm a C++ (hobbyist) programmer using Boost libraries for my network programming needs and OpenSSL for my SSL cryptography. However, I hope this can be answered without paying too much attention to these facts :)
Answers:
i. Having your application talk to a web service that then talks to the database is a better setup. This abstracts the database away from the clients (and therefore away from direct access from the internet).
ii. This depends on what the threats to your system are. If the data you are vending from the web service mentioned above is not sensitive, and is not user specific (say an app that allows searching of public photo galleries, so your web service simply returns a result set with URLs), then you might be able to get by with simply using SSL. Other apps get around installing their own cert in a myriad of ways. They can either get a cert from a CA like Verisign, so your computer already trusts it, or they can deploy the public cert with the binary of their app and handle it inside of their app (this is a form of certificate pinning).
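The pinning check itself reduces to comparing the fingerprint of the certificate the server presents against a value shipped with the app. A sketch of that comparison, with a made-up placeholder fingerprint (in Node.js, for example, such a check can be wired into `tls.connect` via its `checkServerIdentity` option; the asker's C++/OpenSSL setup would do the equivalent against the peer certificate):

```javascript
// Certificate-pinning sketch: accept the peer only if its fingerprint
// matches the value baked into the application at build time.
const PINNED_FINGERPRINT =
  "AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12"; // placeholder

function verifyPin(cert) {
  // cert.fingerprint is the hex digest of the peer certificate.
  if (cert.fingerprint !== PINNED_FINGERPRINT) {
    // Returning an Error mirrors the checkServerIdentity contract:
    // a truthy return value rejects the connection.
    return new Error("certificate fingerprint does not match pinned value");
  }
  return undefined; // undefined means the identity is accepted
}
```

The trade-off with pinning is operational: rotating the server certificate requires shipping an app update (or pinning the key rather than the leaf certificate).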
ii part 2. If you need the clients to authenticate, for reasons of either wanting to make sure that not just anyone can use your web service, or to support a more advanced authorization model, then you would need to implement some sort of authentication. That would be a much bigger question to address.
Make sure you use CA-signed certificates, and not self-signed. You might also want to consider mutual authentication between your service and the database.

Can I use a database in the browser?

My application needs to use a database. I've read online that databases are supported in some browsers but they are now deprecated? It was very confusing. I need to use a database with HTML. Is it possible to use a database with HTML5?
UPDATE
It needs a database to store data when the user is offline. It needs to support the major browsers from IE 9 on up, if possible on Mac and Windows on desktop, and possibly Linux. The application is a client-side HTML editor. There is no server side yet, so I need to persistently store "files" on the client side. Also, when I do have a server side, I will still need to save the user's work when they have no internet connection. Later, when they go online, the application can sync up with the server side. I could make do with local storage, but it is too limited in space at 5MB. Is that the maximum limit? The application allows you to use images, and those will be saved as base64 data URI resources, so you will quickly use up that space with just a few projects.
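For a rough sense of why base64-encoded images exhaust a ~5MB quota so quickly: base64 encodes every 3 bytes as 4 characters, a ~33% size increase before any storage overhead (and many browsers store localStorage strings as UTF-16, doubling the bytes again). A quick back-of-the-envelope sketch:

```javascript
// base64 output length for a binary payload: 3 input bytes -> 4 characters,
// with the final group padded up to a full 4 characters.
function base64Size(byteLength) {
  return Math.ceil(byteLength / 3) * 4; // characters, excluding the data: prefix
}

const oneMiB = 1024 * 1024;
const encoded = base64Size(oneMiB);
console.log(`1 MiB image -> ${encoded} base64 chars (~${(encoded / oneMiB).toFixed(2)}x)`);
```

So three or four 1 MiB images, encoded as data URIs, are already enough to blow the quota.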
A few people asked why go for a database on the client. Because it's appropriate for this situation and if browsers support it then I would like to use it.
You should not use a database in the browser, even if you could, because it probably would not be standards-conforming. However, perhaps consider HTML5 local storage.
What you can do is develop some web service accessing a database; that access happens in the HTTP server (so the database is used on the server, not in the browser, which just indirectly gives access to it).
I would suggest developing your web application in Opa, but you could even use PHP for that. And you could make it a CGI or a FastCGI application.
BTW; you did not tell us which operating system you are using.
You could consider making your application some specialized HTTP server, e.g. using Wt or Ocsigen -or using some HTTP server library like libonion etc- and then it could access some database (and your application could run on e.g. localhost:8086 if you wanted to).
PS. I believe that having web pages store a large amount of data on the client side (in the browser) is simply not reasonable (and against the standards and the spirit of the web). You should store such large data on the server side (or have some URL to download it from the server). You could decide to make a "server" application running on localhost (of course, this is operating-system specific).
PPS: If coding an HTML editor, I am not sure that standard browsers are the adequate tool. You may however use the contenteditable attribute.
Check in CanIUse website.
http://caniuse.com/#feat=sql-storage
It's not DEPRECATED, but it's not supported by all browsers so far. The reason is that it did not become a valid standard: all the implementations are based on SQLite, and that is considered too limited a basis for a standard.
Yes, go for IndexedDB. It's still in the development phase, but you can use it:
http://www.html5rocks.com/en/tutorials/indexeddb/todo/
I've read online that databases are supported in some browsers but they are now deprecated?
You're right: The Web SQL Database spec has been abandoned. There are a few browsers that still support it, but not many, and it will likely disappear entirely in the very near future.
As an alternative, a newer spec called IndexedDB is now being promoted by the browser vendors. The downside is that it is still relatively new, and thus you will probably have many users with browsers that do not support it.
Depending on the size of the data you need to store, you might consider Web Storage instead. This is intended just for plain text and can't store massive quantities of data, but it is a well-established standard, so you've got a much better chance of compatibility.
If you can get away with storing a relatively small data set (i.e. a few megabytes or less), then this is definitely the best option here. This would certainly be my advice: try to keep your offline data requirements as small as possible, and keep them in Web Storage.
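One practical detail with Web Storage is that writes can fail once the quota is hit, so saves should be wrapped in a try/catch. A small hedged sketch of such a wrapper (the function names are mine, and the in-memory Map fallback is only there so the code also runs outside a browser):

```javascript
// Tiny wrapper over Web Storage: catches quota errors on write and falls
// back to an in-memory Map when localStorage is unavailable.
const memoryFallback = new Map();
const hasLocalStorage = typeof localStorage !== "undefined";

function saveItem(key, value) {
  try {
    if (hasLocalStorage) {
      localStorage.setItem(key, value); // may throw QuotaExceededError
    } else {
      memoryFallback.set(key, value);
    }
    return true;
  } catch (err) {
    // The ~5MB budget is spent; signal the caller to prune or warn the user.
    return false;
  }
}

function loadItem(key) {
  return hasLocalStorage
    ? localStorage.getItem(key) // null when absent, matching the fallback
    : (memoryFallback.get(key) ?? null);
}
```

Checking the boolean return of `saveItem` is what lets an offline editor warn the user instead of silently losing work.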
Hope that helps.
For a detailed run-down of all available browser-side storage options, you should read this article. It gives a much more detailed description of each of the options I described above, plus a few others.
