I have an object stored in Datastore. Is there a maximum character length for the object's key.urlsafe()? I cannot seem to find the answer in the docs anywhere.
The length varies with the key's components: urlsafe() is a websafe base64 encoding of the serialized key, which contains each (kind, id) pair in the ancestor path. The maximum character length is therefore roughly 4/3 of the total length of the kind names and ids.
You can look for def urlsafe in the source code of google.appengine.ext.ndb.key.
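As a rough sketch of why the length varies (this is not the actual ndb serialization format — the real key is a serialized protobuf — just an illustration of base64 growth), the websafe base64 output is about 4/3 the size of whatever goes in:

```python
import base64

def websafe_len(kind: str, ident: str) -> int:
    # Pretend-serialize a (kind, id) pair; the real ndb key serializes a
    # Reference protobuf, but the base64 length grows the same way.
    payload = (kind + ":" + ident).encode("utf-8")
    return len(base64.urlsafe_b64encode(payload))

print(websafe_len("Model", "123"))                   # short kind + id -> short key
print(websafe_len("AVeryLongModelName", "a" * 100))  # longer components -> longer key
```

So there is no fixed maximum: a key with long kind names, large ids, or a deep ancestor path simply produces a longer urlsafe string.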
I faced an issue where sometimes, when I created custom formula fields on Salesforce objects, I couldn't save them; the reason was the 5,000-character limit for that type of field.
The main trouble is that when I copied the content of the formula into a notepad that counts characters, I saw that there were fewer than 5,000 characters. After some investigation I found that references to other formula fields, and also some functions like TODAY(), can implicitly increase the character count. So the real length is greater than the number of characters you actually type.
My question: how can I see the real number of characters in a formula field, and how can I tell which parts of the formula add the extra characters?
The code in a Formula Field can exceed the maximum number of characters allowed in two ways:
Directly, in the formula field's own characters (3,900) — I think this is your case.
In the overall size of the formula after other referenced formula fields are factored in (5,000 bytes).
You can refer to Formula Field exceeds maximum number of characters to find a workaround.
Hope this helps!
Thanks,
Swayam
The input: An array of strings, and a single string.
The task: Find all entries in the array that contain the input string as a substring.
The input array can be prepared or sorted in any way required, and any auxiliary data structure required built. The time required to prepare the data structures is (within bounds of sanity) unimportant.
The goal is maximum speed on the search.
What algorithm would you use that isn't just a linear search?
Because it says time required to prepare data structures is unimportant, I'd hash it. The key is a string (specifically, a substring), and the value is a list of integers corresponding to indices in the array whose elements have the key as a substring.
To build, take each string in the array and determine all possible substrings of that string, inserting each such key-value pair into the hash table. If the key already exists, append the index to the list rather than inserting/creating a new list.
Once you build this hash table, lookup is O(1): grab the list keyed by the input string and return it.
EDIT: Looking more closely at the question, it seems like you'd want to return the actual strings in the array, rather than their indices. The hash table approach will work either way.
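A minimal sketch of this approach (the function and variable names are mine, for illustration):

```python
from collections import defaultdict

def build_substring_index(strings):
    """Map every substring of every entry to the set of array indices
    whose entries contain that substring."""
    index = defaultdict(set)
    for i, s in enumerate(strings):
        for start in range(len(s)):
            for end in range(start + 1, len(s) + 1):
                index[s[start:end]].add(i)
    return index

strings = ["banana", "bandana", "apple"]
index = build_substring_index(strings)
print(sorted(index.get("ana", set())))  # -> [0, 1]
```

The trade-off is memory: a string of length N has on the order of N²/2 distinct substrings, so this table can get large for long strings.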
You might want to build an index of all string suffixes. Look into suffix trees to find out how this could be done. The Wikipedia article might be too generalized, so here is an adapted algorithm:
Building index
for each string in the array
get all its suffixes (there are N suffixes for a string of length N) and store a reference to the string in an ordered associative container keyed by suffix (the index)
Searching
find the lower bound of your search term in the index
move through the index starting from the lower bound until the index keys stop being prefixed with the search term
the union of all references you find along the way is your search result
There are N²/2 substrings for a string of length N but only N suffixes, so a suffix-based data structure should be more memory-efficient than a substring-based one.
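The same idea can be sketched with a sorted list of suffixes instead of a full suffix tree (a simpler structure with the same search logic; names here are mine):

```python
from bisect import bisect_left

def build_suffix_index(strings):
    """Sorted list of (suffix, original index) pairs for every suffix
    of every string in the array."""
    entries = []
    for i, s in enumerate(strings):
        for start in range(len(s)):
            entries.append((s[start:], i))
    entries.sort()
    return entries

def search(index, term):
    """Collect the indices of all strings having a suffix prefixed by term,
    i.e. all strings containing term as a substring."""
    result = set()
    pos = bisect_left(index, (term,))  # lower bound of the search term
    while pos < len(index) and index[pos][0].startswith(term):
        result.add(index[pos][1])
        pos += 1
    return result

strings = ["banana", "bandana", "apple"]
index = build_suffix_index(strings)
print(sorted(search(index, "ana")))  # -> [0, 1]
```

Search is O(log M + K) over M total suffixes and K matches, and the index holds only N suffixes per string rather than N²/2 substrings.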
I have a DAO method which accepts a HashMap<String, String[]> as a parameter.
The key in the HashMap is the name of the column to be searched.
The String[] contains the potential values to be searched for.
If the String array is of length 1, the where clause matches the exact value in the string array.
If the String array is of length 2, the where clause looks for all values BETWEEN the two values in the string array.
iBATIS does not seem to have a mechanism for determining the length of an array in the sqlMap XML file (since an array's length is not exposed through a JavaBean getter/setter).
Is it possible in ibatis to conditionally detect the length of the string array and adjust the query to either an exact match or a BETWEEN statement?
For example, if the key is age and the value is 18, then all 18 year old users are returned. However, if the key is age and the values are 18 and 23, then all users between 18 and 23 are returned.
The isEqual tag initially looked quite promising, but it does not work with array.length, since arrays do not expose their length through a getter/setter. Can iBATIS determine the length of an array in the sqlMap?
Thanks.
I wrote some Java code to convert my HashMap to hold a List instead of String[], and iBATIS worked much better with the List's size method.
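For illustration only, here is a sketch of what the dynamic SQL might look like in an iBATIS 2 sqlMap once the values are Lists. The statement id, result class, column and property names are all hypothetical, and you should verify the size/indexed-property navigation against your iBATIS version:

```xml
<select id="findUsers" parameterClass="java.util.Map" resultClass="User">
  SELECT * FROM users
  <dynamic prepend="WHERE">
    <isEqual property="age.size" compareValue="1">
      age = #age[0]#
    </isEqual>
    <isEqual property="age.size" compareValue="2">
      age BETWEEN #age[0]# AND #age[1]#
    </isEqual>
  </dynamic>
</select>
```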
I am having difficulty retrieving data from my table. I am using Amazon DynamoDB and have successfully populated my table. When I scan the table or use getItem, the returned information is of type AttributeValue. I have looked through the documentation and can't find how to process an AttributeValue into an int or a string. The example scan code on the Amazon website returns the information in a Dictionary object, but it is a dictionary mapping strings to AttributeValues. Is there any way to query a DynamoDB table and store the result in something that maps strings to strings, or strings to integers?
Assuming you are using the AWS SDK for Java, objects of Class AttributeValue can be of type String, Number, StringSet, NumberSet and the class features respective getters/setters accordingly, e.g.:
public String getN() - Numbers are positive or negative exact-value decimals and integers. A number can have up to 38 digits precision and can be between 10^-128 to 10^+126.
public String getS() - Strings are Unicode with UTF-8 binary encoding. The maximum size is limited by the size of the primary key (1024 bytes as a range part of a key or 2048 bytes as a single part hash key) or the item size (64k).
Please note that the return value of getN() is still a string and must be converted with your Java string-to-number conversion method of choice. This implicit weak typing of DynamoDB data type retrieval/submission, based on String parameters only, is a bit unfortunate and doesn't exactly ease development; see e.g. my answer to Error in batchGetItem API in java for such an issue.
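The same wrapper shape (each value tagged with "S", "N", etc.) appears in any SDK that exposes the low-level API. As a hedged illustration in Python — the helper below is my own, not part of any SDK — unwrapping such a map into plain values looks like:

```python
def unwrap(item):
    """Convert a low-level DynamoDB item, e.g. {"age": {"N": "18"}},
    into a plain dict of Python strings and numbers."""
    def convert(av):
        (tag, value), = av.items()  # each AttributeValue carries one type tag
        if tag == "N":              # numbers travel over the wire as strings
            return int(value) if value.lstrip("-").isdigit() else float(value)
        if tag == "NS":             # number set -> list of floats
            return [float(v) for v in value]
        return value                # "S" and "SS" are already strings/lists
    return {name: convert(av) for name, av in item.items()}

item = {"name": {"S": "alice"}, "age": {"N": "18"}}
print(unwrap(item))  # -> {'name': 'alice', 'age': 18}
```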
Good luck!
I am using character varying data type in PostgreSQL.
I was not able to find this information in PostgreSQL manual.
What is the maximum number of characters allowed in the character varying data type?
Referring to the documentation, there is no explicit limit given for the varchar(n) type definition. But:
...
In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be very useful to change this because with multibyte character encodings the number of characters and bytes can be quite different anyway. If you desire to store long strings with no specific upper limit, use text or character varying without a length specifier, rather than making up an arbitrary length limit.)
Also note this:
Tip: There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.
From documentation:
In any case, the longest possible character string that can be stored is about 1 GB.
Character types in PostgreSQL:
character varying(n), varchar(n) = variable-length with limit
character(n), char(n) = fixed-length, blank padded
text = variable unlimited length
Based on your problem, I suggest you use the text type; it does not require a character length.
In addition, PostgreSQL provides the text type, which stores strings of any length. Although the type text is not in the SQL standard, several other SQL database management systems have it as well.
source : https://www.postgresql.org/docs/9.6/static/datatype-character.html
The maximum string size is about 1 GB. Per the postgres docs:
Very long values are also stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that.)
Note that the max n you can specify for varchar is less than the max storage size. While this limit may vary, a quick check reveals that the limit on postgres 11.2 is 10 MB:
psql (11.2)
=> create table varchar_test (name varchar(1073741824));
ERROR: length for type varchar cannot exceed 10485760
Practically speaking, when you do not have a well rationalized length limit, it's suggested that you simply use varchar without specifying one. Per the official docs,
If you desire to store long strings with no specific upper limit, use text or character varying without a length specifier, rather than making up an arbitrary length limit.