From "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE", I'd like to see the TIME_TRAVEL_BYTES broken out from 'STORAGE_BYTES' over time, by table and/or database. As it exists today, TIME_TRAVEL_BYTES and ACTIVE_BYTES are bundled up and combined into the existing field, 'STORAGE_BYTES'.
I've tried using "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" to access this information, but it seems that this can only show me the TIME_TRAVEL_BYTES at the exact moment of querying.
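For context, here's roughly what I'm doing today, plus the snapshot workaround I'm considering; the history table, task, and warehouse names below are all hypothetical:
-- Point-in-time breakdown; only reflects the moment the query runs
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, ACTIVE_BYTES, TIME_TRAVEL_BYTES
FROM "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS";

-- Possible workaround: snapshot those metrics on a schedule to build up history
CREATE TABLE IF NOT EXISTS STORAGE_METRICS_HISTORY AS
SELECT CURRENT_TIMESTAMP() AS SNAPSHOT_AT, *
FROM "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS"
WHERE 1 = 0;  -- copy the structure only

CREATE TASK IF NOT EXISTS SNAPSHOT_STORAGE_METRICS
  WAREHOUSE = MY_WH  -- hypothetical warehouse
  SCHEDULE = 'USING CRON 0 6 * * * UTC'
AS
INSERT INTO STORAGE_METRICS_HISTORY
SELECT CURRENT_TIMESTAMP(), *
FROM "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS";
-- Tasks are created suspended: ALTER TASK SNAPSHOT_STORAGE_METRICS RESUME;
A daily snapshot like that would work going forward, but it can't recover the historical breakdown, which is why I'm asking.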
Has anyone had success finding a way to break out this usage data over time? If so, please let me know how it's done!
Good day everyone, this is my first time posting here but I'd like some help with a recent issue.
So, I'm working on a small React app just for fun and to keep practising. In it I made a few constants for different datasets (each with varying data fields); that is, I have various kinds of records categorized in those constants, since some records span 2+ table rows and some need only one.
Initially I was going to share the app's code, but the datasets are a tad... large, so, following the posting tips, I instead created an online sandbox that illustrates what I managed to do with a much simpler and smaller scenario: [link to the sandbox].
However, while looking around and trying different things I found out about react-table, which seems to be exactly what I need thanks to its useful features and how lightweight it is. I mainly need it for filtering records, but I want to try some other features as well.
All this brings me to my problem: I want to populate a single react-table with the multiple datasets together, each with its own way of being laid out in the JSX. However, I can't figure out how to do it, and my app's code is getting messy in the process. So I thought I'd ask here, using the code in the sandbox as a base; if there's a solution, I can edit my app accordingly. Otherwise I guess I could make one table per dataset, or just use good ol' HTML+JS+CSS, but neither is the result I'm aiming for.
I'm in no rush for answers since this is just a project for fun and practice; any help is appreciated. Thanks in advance.
Azure Form Recognizer's prebuilt-invoice model doesn't recognize the currency and some of my other custom fields from my invoice PDF. The General Document model gets me all the key-value pairs, but with General Document I'd need to write my own logic to categorize the invoice-related fields, which prebuilt-invoice already does.
I need all the key-value pairs from the prebuilt-invoice API, so I can find the missing elements myself.
Has anybody faced this? How did you overcome it? One way I can think of is to call both APIs for the same document, but that hurts performance and increases cost.
Any suggestions?
I wish to create a generic component which can save the object name and field names, with old and new values, in a Big Object.
The brute-force approach is: on every update of each object, get the field API names using describe and compare the old and new values of those fields; if a field was modified, insert a record into the new Big Object.
But that will consume a lot of CPU time, and I am looking for a more optimal way to handle this.
Any suggestions are appreciated.
Well, do you have any code written already? Maybe benchmark it and then see what you can optimise instead of overdesigning it from the start... Keep it simple, write a test harness, and then try to optimise (without breaking unit tests).
A couple of random ideas:
You'd be doing this in a trigger? Then your "describe" can happen only once. You don't need to describe every single field; you need only one operation, outside the trigger's main loop:
Set<String> fieldNames = Account.sObjectType.getDescribe().fields.getMap().keyset();
System.debug(fieldNames);
This will get you "only" the field names, but that's enough; you don't care whether they're picklists or dates or what. Use that with the generic sObject.get('fieldNameHere') and it's a good start.
Or maybe without describe at all: sObject's getPopulatedFieldsAsMap() will give you a handy Map which you can easily iterate and compare.
Or JSON.serialize the old and new versions of the object, and if the strings aren't identical, you know what to do. No idea if they'll always serialise with the same field order, though, so checking whether the maps are identical might be better.
Do you really need to hand-craft field history tracking like that? You have 1M records of free storage, but it could explode really easily in a busier SF org, especially if you have workflows, processes, or other triggers that translate to multiple updates (= multiple trigger runs) in the same transaction. Perhaps normal field history tracking + Chatter feed tracking + maybe even Salesforce Shield (it comes with 60 more tracked fields, I think) would be more sensible for your business needs.
I have an application which has several unrelated tables in its db. I'll explain by using an "auto-updating" version of the SO homepage as an example, so let's say I have the tables "users", "comments" and "questions".
The homepage client side needs to periodically poll the server, and get a log of all the new "events" that have happened. I.e., I'd like to display (somehow) the new questions, comments and users that have been added to SO since the last poll.
One way would be to simply keep a variable on the client side containing the last seen ID for each of my tables, send those to the server, and have the server send back the new users, comments and questions.
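In SQL terms, each poll would boil down to one query per table, something like this (the parameter names are just placeholders):
SELECT * FROM questions WHERE id > :last_question_id;
SELECT * FROM comments WHERE id > :last_comment_id;
SELECT * FROM users WHERE id > :last_user_id;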
The problem is, what happens when I add a new type of information, say, votes. Now I have to store another variable on the client-side, and the server has to poll another table. And so on, for every new type of information I keep.
I'm looking for a solution that helps me avoid this.
Another problem: say I'd like to see all the "events" that have happened since last time, but sorted according to when they took place.
One direction I had in mind is to have a single "events" table, which records when each event happened. I can then poll only this table and get a list of all the new events. The problem is that each event is pretty different (a new comment has different columns than a new upvote, etc.), so I'm not sure how to implement this, or whether it's even a good idea.
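Concretely, I pictured something like this (MySQL-flavoured; all names purely illustrative):
-- One shared log of everything that happens
CREATE TABLE events (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    event_type VARCHAR(20) NOT NULL,   -- 'question', 'comment', 'vote', ...
    entity_id  INT NOT NULL,           -- id of the row in the type-specific table
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Polling then becomes a single, already-sorted query
SELECT * FROM events WHERE id > :last_seen_event_id ORDER BY created_at;
But the client would still have to fetch the type-specific columns for each event somehow, which is where I get stuck.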
Does anybody have any ideas how I can solve this? This seems like something that would come up a lot, but I don't really have much experience with databases, unfortunately.
Thanks!
It sounds to me like you're trying to future-proof via database design. While this can be done through something like an EAV model, I caution against that because the value it adds tends not to be worth the cost.
Instead, you should model the database as closely to reality as possible, not around how you intend to use it.
Then use SQL to project the data into the shape you need. You can do this with statements that either deliver the metadata you need,
e.g.
SELECT COUNT(ID), 'Comments' AS Type
FROM Comments
WHERE lastUpdate > #InputParameter1#
UNION
SELECT COUNT(ID), 'Questions' AS Type
FROM Questions
WHERE lastUpdate > #InputParameter1#
Or (and this doesn't get used often enough) return more than one result set from your database in one go:
SELECT UserId, CommentText
FROM Comments
WHERE lastUpdate > #InputParameter1#;

SELECT UserId, Questions, Tags
FROM Questions
WHERE lastUpdate > #InputParameter1#;
That said, you will still have to write some code when you add new stuff, but it should be limited to updating your SQL, adding new containers for your data, and then writing code to display it to the end users and to validate and store it.
Honestly, the idea that adding new stuff requires some work doesn't seem that awful to me.
I have a location auto-complete field which has auto-complete for all countries, cities, neighborhoods, villages, and zip codes. This is part of a location-tracking feature I am building for my website, so you can imagine this list will run to multi-millions of rows; I'm expecting over 20 million at least with all the villages and postal codes. To make the auto-complete perform well I will use memcached so we don't always hit the database to get this list. It will be used a lot, as this is the primary feature on the site. But the question is:
Is only one instance of the list stored in memcached irrespective of how many users are pulling the info, or does it need to maintain a separate instance for each user? So if, say, 20 million people are using it at the same time, will that differ from just one person using the location auto-complete? I am also open to other ideas on how to implement this location auto-complete so it performs well.
Or can I do something like this: when a user logs in, I send them the list in the background anyway, so by the time they reach the auto-complete text field their computer will have it ready to load instantly?
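For reference, the kind of per-keystroke query I'd be caching the results of looks roughly like this (the schema is purely illustrative, and making the prefix LIKE usable assumes an index on locations(name)):
SELECT name, region, country_code
FROM locations
WHERE name LIKE 'new y%'
ORDER BY population DESC
LIMIT 10;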
Take a look at Solr (or Lucene itself); using NGram (or EdgeNGram) tokenizers you can get good autocomplete performance on massive datasets.