(Submitting on behalf of a Snowflake User)
In the Snowflake UI, I see that pipe_usage_history has a credits_used column. Does this column include the credits for event notifications, which are charged at 0.06 credits per 1000 notifications?
Please advise...
(According to Snowflake Sales Engineers)
You are correct: the column does include the credits for event notifications. That said, if you are seeing any sort of discrepancy, you should file a support ticket to initiate a possible bug report.
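If you want to sanity-check the numbers yourself, here is a rough sketch (assuming the snowflake-connector-python package and access to the shared SNOWFLAKE.ACCOUNT_USAGE schema; the connection parameters are placeholders) that sums credits_used per pipe over the last 30 days so it can be compared against the invoice:

```python
# Rough sketch, not an official reconciliation method: pull per-pipe credit
# totals from ACCOUNT_USAGE. Connection parameters are placeholders.
import snowflake.connector

QUERY = """
    SELECT pipe_name,
           SUM(credits_used)   AS total_credits,
           SUM(files_inserted) AS total_files
    FROM snowflake.account_usage.pipe_usage_history
    WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
    GROUP BY pipe_name
    ORDER BY total_credits DESC
"""

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="my_password",    # placeholder
)
try:
    for pipe_name, credits, files in conn.cursor().execute(QUERY):
        print(f"{pipe_name}: {credits} credits, {files} files")
finally:
    conn.close()
```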
Related
QuickBooks allows users to change posted periods. How can I tell if a user does this?
I actually don't need an audit log, but just the ability to see recently added/edited data that has a transaction date that's over a month in the past.
In a meeting today it was suggested that we may need to refresh data for all our users, going back as far as a year, on a regular basis. This would be pretty time consuming, and I think unnecessary when the majority of the data isn't changing. But I need to find out how I can see whether data (such as an expense) has been added to a prior period, so I know when to pull it again.
Is there a way to query for data (in any object or report) based not on the date of the transaction, but based on the date it was entered/edited?
I'm asking this in regard to using the QBO API; however, if you know how to find this information from the web portal, that may also be helpful.
QuickBooks has a ChangeDataCapture endpoint which exists for exactly the purpose you are describing. It's documented here:
https://developer.intuit.com/app/developer/qbo/docs/api/accounting/all-entities/changedatacapture
The TLDR summary is this:
The change data capture (cdc) operation returns a list of objects that have changed since a specified time.
In other words, you can poll this endpoint repeatedly, and you'll only get back the data that has actually changed since the last time you hit it.
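For illustration, here is a minimal Python sketch of polling that endpoint over plain HTTP (the realm ID and OAuth token are placeholders, and the entity list is just an example; production code would use Intuit's SDK or proper OAuth2 token handling):

```python
# Minimal sketch of polling the CDC endpoint (realm_id and access_token are
# placeholders; real apps need OAuth2 refresh handling).
import datetime
import requests

BASE_URL = "https://quickbooks.api.intuit.com"
realm_id = "1234567890"            # placeholder company ID
access_token = "YOUR_OAUTH_TOKEN"  # placeholder

# Ask for everything touched (added/edited/deleted) in the last 24 hours.
changed_since = (datetime.datetime.utcnow()
                 - datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")

response = requests.get(
    f"{BASE_URL}/v3/company/{realm_id}/cdc",
    params={"entities": "Purchase,Invoice,Bill", "changedSince": changed_since},
    headers={"Authorization": f"Bearer {access_token}",
             "Accept": "application/json"},
)
response.raise_for_status()

# Each CDCResponse block lists the changed objects for one entity type.
for block in response.json().get("CDCResponse", []):
    for query_response in block.get("QueryResponse", []):
        print(query_response)
```

You would store the timestamp of each successful poll and pass it as changedSince on the next run, so each call only returns what changed in between.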
I know that DDL and SHOW operations do not consume compute credits. Is there a list someone has compiled of which operations in Snowflake do not consume compute credits? Appreciate your help.
There is no hard rule that Snowflake doesn't charge for DDL and SHOW operations. Snowflake charges based on the cost of storage and the cost of compute resources consumed.
Storage is billed per terabyte, compressed, per month; compute is billed based on the processing units, referred to as credits, consumed to run your queries.
Please refer to the following link for more details:
https://www.snowflake.com/pricing/
There is a list of statements that can be run in Snowflake that do not consume compute (virtual warehouse) credits. This list includes:
- DDL statements
- Queries that hit result cache
- RESULT_SCAN queries
- SHOW commands
- Some COUNT, MIN, and MAX queries.
Snowflake announced in November that, starting February 2020, some of these features that previously ran free of compute credits will incur Cloud Services billing in certain situations. Here's the recently published blog:
https://www.snowflake.com/blog/whats-new-with-the-snowflake-cloud-services-billing-model/
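As a small illustration of two items from that list, here's a sketch (connection parameters and object names are placeholders) that runs a SHOW command and then post-processes its output with RESULT_SCAN, deliberately without setting a warehouse on the session. Per the list above, this shouldn't consume virtual warehouse credits, though as the blog post notes, Cloud Services billing may now apply in some cases:

```python
# Sketch: SHOW command + RESULT_SCAN over its output, with no warehouse set on
# the session. Connection parameters are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder
    database="MY_DB",        # placeholder
    schema="PUBLIC",
    # Deliberately no warehouse= here.
)
cur = conn.cursor()

# Metadata-only: answered by the cloud services layer, not a warehouse.
cur.execute("SHOW TABLES")

# Post-process the SHOW output with SQL via RESULT_SCAN.
cur.execute("""
    SELECT "name", "rows"
    FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
    ORDER BY "rows" DESC
""")
for name, rows in cur:
    print(name, rows)
conn.close()
```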
I'm creating a website which has a premium user feature. I'm thinking about how to design the database to store the premium user plan, and how to check it.
My main idea so far is:
Having 2 fields on the user table: premium (boolean) and expires (date)
When the user makes a payment, it will calculate the plan duration, set premium to 1, and set the expires date to the end of the duration
Every time I check user->isPremium(), it will also check whether the plan is expired; if so, set premium back to zero and offer a renewal
Aside from this, all payments/transactions will be recorded in a logs table for record keeping.
This is a simple design, I think, but since this is a common feature on many websites, I thought I'd ask you guys how the pros do this.
This probably won't make much difference on the design, but I'll use Stripe for handling payments.
It looks good to me. It is simple and solves your problem.
Hint 1: Depending on the semantics of your premium and expires fields, you do not need both. You can just change your user->isPremium() to check whether the expires date has passed. Make sure you also change how you handle offering a renewal.
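For example, a minimal sketch of Hint 1 (the class and field names here are purely illustrative, not tied to your schema, ORM, or Stripe):

```python
# Minimal sketch of Hint 1: a single nullable expires_at is enough, and
# "premium" is derived from it. Names are illustrative placeholders.
from datetime import datetime
from typing import Optional


class User:
    def __init__(self, expires_at: Optional[datetime] = None):
        # None means the user never bought a premium plan.
        self.expires_at = expires_at

    def is_premium(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.utcnow()
        return self.expires_at is not None and self.expires_at > now

    def should_offer_renewal(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.utcnow()
        # Offer a renewal once a previously purchased plan has lapsed.
        return self.expires_at is not None and self.expires_at <= now
```

Expired users keep their old expires_at, so the renewal offer can be driven off the same field without a separate boolean to reset.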
Hint 2: I work on a system that handles plan subscriptions, and I had to deal with the following cases:
Permitting users to renew/extend the subscription before the expiration date (see the sketch after this list).
Different prices for different durations.
Discounts.
The delay between bill generation and payment confirmation.
Users with pending payments trying to buy again.
Users asking to cancel current subscriptions.
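Here is a sketch of just the renew/extend-before-expiration case (illustrative only; pricing, discounts, and payment confirmation are left to your gateway, e.g. Stripe webhooks):

```python
# Sketch of the renew/extend case from the list above. Names and the
# days-based plan length are illustrative assumptions.
from datetime import datetime, timedelta
from typing import Optional


def extend_subscription(expires_at: Optional[datetime],
                        plan_days: int,
                        now: Optional[datetime] = None) -> datetime:
    """Return the new expiration date after a confirmed payment."""
    now = now or datetime.utcnow()
    # Renewing early stacks on top of the remaining time instead of resetting it.
    base = expires_at if (expires_at and expires_at > now) else now
    return base + timedelta(days=plan_days)
```

So a user who renews a 30-day plan with 10 days left ends up with 40 days, rather than losing the remaining time.
Hope it helps.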
I am using JDeveloper 11.1.2.3.0
I have implemented af:calendar functionality in my application. My calendar is based on a ViewObject that queries a database table with a large number of records (500-1000). Running the SELECT query directly against the database table is very fast, only a few milliseconds. The problem is that my af:calendar takes too long to load: more than 5 seconds. If I just want to change the month or the calendar view, I have to wait approximately the same amount of time. I searched a lot through the net but found no explanation for this. Can anyone please explain why it takes so long? Has anyone ever faced this issue?
PS: I have also tested with JDeveloper 12 and the problem is exactly the same.
You should look into the ViewObject tuning properties to see how many records you fetch in a single network access, and do the same check for the executable that populates your calendar.
Also try using the HTTP Analyzer to see what network traffic is going on, and the ADF Logger to check what SQL is being sent to the database.
https://blogs.oracle.com/shay/entry/monitoring_adf_pages_round_trips
I am building a demo for a banking application in App Engine.
I have a Users table and Stocks table.
In order to be able to list the "Top Earners" in the application, I save a "Total Amount" field in each User's entry so that I can later SELECT it with ORDER BY.
I am running a cron job that runs over the Stocks table and updates each user's "Total Amount" in the Users table. The problem is that I often get timeouts, since the Stocks table is pretty big.
Is there any way to overcome the time limit in App Engine, or is there a workaround for this kind of update (where you MUST select so many entries from a table that the request times out)?
Joel
The usual way is to split the job into smaller tasks using the task queue.
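For example, a rough sketch of that pattern using query cursors and the deferred library (Python runtime, old db API; the User/Stock model definitions are placeholders standing in for your Users and Stocks tables, and deferred has to be enabled as a builtin in app.yaml):

```python
# Rough sketch: process users in small batches, chaining tasks via a cursor so
# no single request hits the time limit. Models are placeholder stand-ins.
from google.appengine.ext import db, deferred

BATCH_SIZE = 100  # small enough that each task finishes well within its deadline


class User(db.Model):
    total_amount = db.FloatProperty(default=0.0)


class Stock(db.Model):
    owner = db.ReferenceProperty(User)
    value = db.FloatProperty(default=0.0)


def update_totals(cursor=None):
    """Recompute Total Amount for one batch of users, then chain the next batch."""
    query = User.all()
    if cursor:
        query.with_cursor(cursor)

    users = query.fetch(BATCH_SIZE)
    if not users:
        return  # all users processed

    for user in users:
        # Recompute this user's total from their stocks.
        user.total_amount = sum(s.value for s in Stock.all().filter("owner =", user))
    db.put(users)

    # Hand the rest of the work to a fresh task, carrying the cursor along.
    deferred.defer(update_totals, query.cursor())


# The cron handler only needs to kick off the first batch:
# deferred.defer(update_totals)
```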
You have several options, all will involve some form of background processing.
One choice would be to use your cron job to kick off a task which starts as many tasks as needed to summarize your data. Another choice would be to use one of Brett Slatkin's patterns and keep the data updated in (nearly) real time. Check out his high-throughput data pipelines talk for details.
http://code.google.com/events/io/2010/sessions/high-throughput-data-pipelines-appengine.html
You could also check out the Mapper API (App Engine MapReduce) and see if it can do what you need.
http://code.google.com/p/appengine-mapreduce/