I was trying to compare two tables, and I kept getting values that looked empty, but were not.
DECLARE @secret_message VARCHAR(50);
SET @secret_message = CHAR(0) + 'Hello World!';
SELECT @secret_message, RIGHT(@secret_message, 12);
From what I can tell, SSMS stops reading the values if they start with a NULL.
Is this a bug or feature?
The bytes are there; it's the code to display the results that isn't showing them.
When SQL Server stores string data, it knows how many characters there are, regardless of the contents. Similarly, newer programming environments tend to encode strings by including the length as its own field (usually a fixed-length integer at the start).
However, older string handling libraries — especially those from C/C++, which in turn underpin much of our core operating systems and GUI desktop platforms — instead used \0-terminated strings. These libraries will walk the data until they find the \0 character and then stop.
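To see that old-school behaviour in isolation, here is a minimal C sketch; the buffer contents mirror the SQL example above, and nothing in it is specific to SQL Server:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* 13 characters of real data, but the very first one is '\0' */
    char secret[] = "\0Hello World!";

    /* sizeof counts the whole array, including the trailing '\0',
       while strlen and printf stop at the first '\0' they meet */
    printf("bytes in the array: %zu\n", sizeof(secret)); /* 14 */
    printf("strlen says:        %zu\n", strlen(secret)); /* 0  */
    printf("displayed as:       \"%s\"\n", secret);      /* "" */
    return 0;
}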
So what we have here is SQL Server returning data that looks like this:
\0Hello World!
Which in turn is (properly!) rendered as an old-school string like this:
\0Hello World!\0
And therefore finally displays as nothing at all when we actually go to render it: the leading \0 terminates the old-school string before a single visible character is reached.
I am trying to create a new reference containing another reference as in ${var${randnum}}.
Ultimately, I want to create a variable which refers to a two times two randomized set of variables.
As the above approach did not work, I developed it further, with the result below.
In the calculate field I write
concat('$','{','trust',${rand_no2},'_' ,${rand_no3_1},'}')
Which should result in
${trust1_1}
and respective combinations.
Without line 11 (name=ref2) the file compiles and I can start it in ODK Collect (v.2.4) on my phone. When I reach line 10 (in ODK Collect), however, I receive the message:
"Error Occured
Dependency cycle in s; recursion limit exceeded!!"
(I included line 11 to show what I want to do in the end.)
I am writing the file in Excel and compile it with ODK xlsform offline. (For testing I transfer it via cable to my phone.)
The xls file for reproduction can be found here:
https://forum.getodk.org/t/concatenate-references-to-create-new-reference-var-randnum/34968
Thank you very much in advance!
You're mixing up some things related to the ${q} syntax, question names and question values.
Note that ODK Collect does not actually understand the ${q} syntax (which is XLSForm-only). It's helpful to look at the actual form format that ODK Collect understands, which is called XForm, an XML format that XLSForm is converted into. However, even if ODK Collect understood the ${q} syntax, your approach still wouldn't work, since you're creating a string value for the ref question (using concat). This wouldn't magically be evaluated as a reference / formula. You cannot dynamically create a reference or formula.
At the moment (until ODK supports something like the local-name() function), maybe the best approach is to use position and put the calculated values inside a group. Something like //group/calc[number(${pos})] perhaps. Note that positions are 1-based (so the first item is position 1) and casting the position to a number or integer is required.
I am in the process of rolling over a bunch of old stored procedures that take NVARCHAR(MAX) strings of comma and/or semicolon separated values (never mind about one value per variable etc.). The code is currently using the CHARINDEX approach described in this question, though in principle any of the approaches would work (I'm tempted to replace it with the XML one, because neatness).
The question, though, is what is the most efficient way of handling escaped delimiters? Obviously the lowest-level approach is a character-by-character parser, but I can't shake the feeling that (1) that's going to be horrible when executed a million times in close succession and (2) it'll be overcomplicated for the situation.
Basically, I want to handle 3 possible escapes:
"\\", "\,", and "\;" somewhere in my string. What's the best way to do it? I should add that, ideally, I don't want to make any assumptions about what characters are included in the string.
Sample data would look something like the below.
Value1,Value\,2,ValueWithSlashAtTheEnd\\,ValueWithSlashAndCommaAtTheEnd\\\,
I'm actually splitting to rows rather than columns, but the principle is the same; typically I'd expect the output below:
SomeName
^^^^^^^^
Value1
Value,2
ValueWithSlashAtTheEnd\
ValueWithSlashAndCommaAtTheEnd\,
Needless to say, the escapes could occur anywhere in a value, and ideally I'd like to handle semicolons as well, but I'll probably be able to infer that from the comma behaviour.
Just feed your split function an edited string:
replace(replace(@yourstring, '\\', '^'), '\,', '#')
Then replace back in the results:
replace(replace(@returnedstring, '#', ','), '^', '\')
Replace ^ and # with any characters that do not occur in the string.
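For what it's worth, the character-by-character parser the question mentions is a single left-to-right pass, so it is less horrible than it sounds. Here is a minimal sketch, in C for concreteness since T-SQL has no neat equivalent (split_escaped is an illustrative name, not an existing function); both , and ; act as unescaped delimiters, matching the question:

#include <stdio.h>

/* Prints one value per line. Understands the three escapes from
   the question: \\  \,  \;  */
static void split_escaped(const char *s)
{
    for (; *s; s++) {
        if (*s == '\\' && (s[1] == '\\' || s[1] == ',' || s[1] == ';')) {
            putchar(s[1]);   /* emit the escaped character literally */
            s++;             /* and skip past it */
        } else if (*s == ',' || *s == ';') {
            putchar('\n');   /* unescaped delimiter ends the value */
        } else {
            putchar(*s);
        }
    }
    putchar('\n');
}

int main(void)
{
    split_escaped("Value1,Value\\,2,ValueWithSlashAtTheEnd\\\\,"
                  "ValueWithSlashAndCommaAtTheEnd\\\\\\,");
    return 0;
}

Run on the sample data from the question, this prints the four expected values, including the trailing slashes and the escaped comma.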
I have a buffer of size 2000, but the data to be inserted is unbounded. Once more than 2000 items arrive, new data should be added at the end of the buffer, i.e. push all existing data from right to left and insert the new data at the end. What kind of algorithm or flow should I try?
You want to use a FIFO, or 'Circular Buffer'. See http://en.wikipedia.org/wiki/Circular_buffer for a complete explanation, or even example code.
Depending on your actual needs, the implementation can be different. If, for example, you always need to access the 2000 items sequentially, you can omit the read pointer (as it is always one item behind the write pointer).
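For illustration, here is a minimal fixed-size ring in C (all names are made up for the sketch). Nothing is ever shifted; once the buffer is full, each new item simply overwrites the oldest one, which gives exactly the keep-the-most-recent-2000 behaviour you describe:

#include <stddef.h>

#define BUF_SIZE 2000

/* Illustrative circular buffer: keeps the most recent BUF_SIZE items. */
struct ring {
    int    data[BUF_SIZE];
    size_t write;  /* index of the next slot to write */
    size_t count;  /* number of valid items (caps at BUF_SIZE) */
};

static void ring_push(struct ring *r, int value)
{
    r->data[r->write] = value;            /* overwrites the oldest item */
    r->write = (r->write + 1) % BUF_SIZE; /* wrap around at the end */
    if (r->count < BUF_SIZE)
        r->count++;
}

/* i = 0 is the oldest item still in the buffer. */
static int ring_get(const struct ring *r, size_t i)
{
    size_t oldest = (r->write + BUF_SIZE - r->count) % BUF_SIZE;
    return r->data[(oldest + i) % BUF_SIZE];
}

The "push all data from right to left" step disappears entirely: only the two indices move, so inserting is O(1) regardless of the buffer size.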
Edit: a queue is something similar. If you are using C++, consider http://www.cplusplus.com/reference/stl/queue/
I have created a fulltext catalog that stores the data from some of the columns in a table, but the contents seem to have been split apart by characters that I don't really want to be considered word delimiters. ("/", "-", "_" etc..)
I know that I can set the language for the word breaker, and http://msdn.microsoft.com/en-us/library/ms345188.aspx gives some idea of how to install new languages - but I need more direct control than that, because all of those languages still break on the characters I don't want to break on.
Is there a way to define my own language to use for finding word breakers?
Full-text indexes only consider the characters _ and ` to be part of a word while indexing. All the other characters are ignored and the words get split where those characters occur. This is mainly because full-text indexes are designed to index large documents, where only proper words are considered, in order to make the search more refined.
We faced a similar problem. To solve it we used a translation table, where characters like @, -, / were replaced with special sequences like '`at`', '`dash`', '`slash`' etc. When searching the full-text index, you have to replace the characters in the search string with these special sequences as well, and then search. This takes care of the special characters.
The ability to configure FTS indexing is fairly limited out of the box. I don't think that you can use languages to do this.
If you are up for a challenge, and have access to some C++ knowledge, you can always write a custom IFilter implementation. It's not trivial, but not too difficult. See here for IFilter resources.
I'm using the libpq library in C to access my PostgreSQL database. So, when I do res = PQexec(conn, "SELECT point FROM test_point3d"); I don't know how to convert the PGresult I got to my custom data type.
I know I can use the PQgetValue function, but again I don't know how to convert the returning string to my custom data type.
The best way to think about this is that data types interact with applications over a textual interface. libpq returns a string from just about anything. The programmer has the responsibility to parse the string and create a data type from it. I know the author has probably abandoned the question, but I am working on something similar and it is worth documenting a few important tricks here that are helpful in some cases.
Obviously, if this is a C-language type with its own input and output representation, then you will have to parse the string the way you would normally.
However for arrays and tuples, the notation is basically
[open_type_identifier][csv_string][close_type_identifier]
For example a tuple may be represented as:
(35,65,1111111,f,f,2011-10-06,"2011-10-07 13:11:24.324195",186,chris,f,,,,f)
This makes it easy to parse. You can generally use existing CSV processors once you strip off the first and last character. Moreover, consider:
select row('test', 'testing, inc', array['test', 'testing, inc']);
row
-------------------------------------------------
(test,"testing, inc","{test,""testing, inc""}")
(1 row)
As this shows, you have standard CSV escaping inside nested attributes, so you can, in fact, determine that the third attribute is an array and then (having undoubled the quotes) parse it as an array. In this way nested data structures can be processed in a manner roughly similar to what you might expect with a format like JSON. The trick, though, is that it is nested CSV.
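To make that concrete, here is a minimal C sketch of the outer layer only: it strips the row value's parentheses and walks the comma-separated body, honouring double quotes and undoubling "" along the way. tuple_foreach_field is an illustrative helper, not part of libpq, and NULL fields and recursive unpacking of nested values are deliberately ignored:

#include <stdio.h>

/* Calls back once per top-level field of a row value such as
   (test,"testing, inc","{test,""testing, inc""}").
   Quoted fields are passed through with their quotes undoubled. */
static void tuple_foreach_field(const char *row,
                                void (*cb)(const char *field))
{
    char buf[256];       /* fixed-size field buffer, fine for a sketch */
    size_t n = 0;
    int in_quotes = 0;

    if (*row == '(')
        row++;           /* skip the opening parenthesis */

    for (; *row; row++) {
        if (*row == '"') {
            if (in_quotes && row[1] == '"') {
                buf[n++] = '"';          /* "" inside quotes -> literal " */
                row++;
            } else {
                in_quotes = !in_quotes;  /* entering or leaving a quote */
            }
        } else if (!in_quotes && (*row == ',' || *row == ')')) {
            buf[n] = '\0';               /* unquoted delimiter ends field */
            cb(buf);
            n = 0;
        } else {
            buf[n++] = *row;
        }
    }
}

static void print_field(const char *field)
{
    printf("field: %s\n", field);
}

int main(void)
{
    tuple_foreach_field(
        "(test,\"testing, inc\",\"{test,\"\"testing, inc\"\"}\")",
        print_field);
    return 0;
}

Fed the row value from the example above, this emits test, then testing, inc, then the array literal {test,"testing, inc"}, which you could hand back to the same kind of walker (with { } instead of parentheses) to unpack the nesting.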