Is 43679 a magic number? [closed] - winforms

Recently I was investigating this question: Storing long values in DataGridView C#. After some tests I found that a DataGridViewCell will not display any text longer than 43679 characters, even though the value is actually stored in the cell and can be accessed programmatically.
Actually, this has also been reported as a bug for SQL Server Management Studio: SSMS - Can not paste more than 43679 characters from a column in Grid Mode.
I guess the limit is intended to prevent overloading the UI rendering. But my questions are:
Why is this exact value used?
Is there any documentation for this limit?

43679 in hex is 0xAA9F, so the first length that fails to display, 43680, is 0xAAA0. Or in inverse form, 0xAAA0 = 0xFFFF - 0x555F.
0x555F is the code point of a Chinese Unicode character meaning 'open; begin'. Maybe it is a Chinese message, or some kind of Chinese magic. :)

Related

What is the difference between a row, record and tuple? [closed]

I am studying in a database development course at the moment and I am having trouble getting my head around this!
My course notes describe a tuple as:
A tuple is a row of a relation
From what I have understood while working with MySQL, you search for row(s), and when browsing through a database you are looking through the rows in a table.
And from what I understood, a record is the information within a row.
Are there any distinct differences between the three?
I know someone has posted something similar but I couldn't really understand his answer.
Thanks for all help in advance!
Peter
In your context they are different words for exactly the same thing.
A tuple, in general, means an ordered list with possibly repeated elements (as contrasted to a set, which has all unique elements and is not ordered).
They are the same.
A row—also called a record or tuple—represents a single, implicitly structured data item in a table.
They mean exactly the same thing: tuples, rows, or records.
Your SELECT query will generate results that may contain 0 or more rows/records/tuples.
A SELECT query can span 1 or more tables, as in the sketch below.
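To make that concrete, here is a minimal SQL sketch (the employees and departments tables are hypothetical): every line of the result set is at once one row, one record, and one tuple; the three words name the same thing.

    -- Hypothetical tables; each line of the result set below
    -- is one row / record / tuple.
    SELECT e.name, d.dept_name
    FROM employees AS e
    JOIN departments AS d
        ON d.id = e.dept_id;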

Is there any advantage to limiting the length of a password if the password is stored as a hash? [closed]

I've seen a lot of sites that limit the length of a password to something like 10 or 12 characters. I understand that that could be a sign that they are storing the password in plain text and they limit the length because they think it will save space, but if they are storing a password as a hash, is there any advantage to this limit?
Edit: I am quite aware that longer passwords are stronger and that a hash is a hash: the length is the same regardless of the input. My real question here is: is there some sort of convoluted reason that system designers use to rationalize this inherently insecure practice?
A hash function takes an input of arbitrary length and returns a value of fixed length. So no, the length of the input has no effect on the length of the output; the output length is always the same for a given hash function.
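As a quick illustration in T-SQL (fitting the SQL Server context of the other questions here; the column aliases are just illustrative), HASHBYTES returns a digest whose length depends only on the algorithm, never on the input:

    -- SHA-256 always produces a 32-byte digest, whatever the input length.
    SELECT
        DATALENGTH(HASHBYTES('SHA2_256', N'short'))             AS ShortInputLen, -- 32
        DATALENGTH(HASHBYTES('SHA2_256', REPLICATE(N'a', 200))) AS LongInputLen;  -- 32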
Putting lower bounds on the length of passwords users can use is only ever to encourage users to use stronger passwords. Upper bounds, I couldn't say. Could be something against spam-bots, or they don't want to have to crunch a 200-character password for performance reasons.
And nobody stores passwords in plaintext.
No. And really, in real life, there's not much of an advantage even if they're storing it in plain text.
To directly answer your question, no, there is no benefit to deliberately implementing a low maximum length. What you'll find is that this often happens when there's a legacy dependency; your password can only be 10 chars long because it's back-ending into a system which implements this limit.
I suspect this is the case in scenarios such as Tesco's. You've got a 13-plus-year-old system and, as you say, it's (allegedly) storing passwords in the clear, and there are possibly multiple points where that 10 char limit is implemented (DB column, SQL command params, etc.).
The only reason I can possibly think of - and it's a stretch - is that a text box with no limit could allow the maximum request length to be exceeded, but we're talking ridiculously long passwords here.

Please help me find answers to some SQL questions [closed]

I have a few questions which I have been asked in an interview:
What is the performance difference between DELETE and TRUNCATE?
How do you delete duplicate data from a table which does not have any id column, without using a CTE?
Why are we able to delete data using a CTE?
DELETE logs each individual deletion, whereas TRUNCATE is a minimally logged operation, hence it is faster.
You could SELECT DISTINCT the data into a temp table, TRUNCATE the original, then reinsert, as sketched below.
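A minimal sketch of that approach, assuming a hypothetical table dbo.Orders with no key column:

    -- Keep one copy of each distinct row in a temp table...
    SELECT DISTINCT * INTO #dedup FROM dbo.Orders;

    -- ...empty the original quickly (minimally logged)...
    TRUNCATE TABLE dbo.Orders;

    -- ...and put the de-duplicated rows back.
    INSERT INTO dbo.Orders SELECT * FROM #dedup;
    DROP TABLE #dedup;

Note that TRUNCATE will fail if the table is referenced by a foreign key, so this trick doesn't work everywhere.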
Not a scooby...
Here are some pointers to solve your issues:
Since TRUNCATE doesn't actually delete the data row by row, but deallocates the pages that hold it, it will be much faster than DELETE. When you use DELETE, every row removal is stored in the transaction log, hence it's much slower.
http://www.codeproject.com/Tips/159881/How-to-remove-duplicate-rows-in-SQL-Server-2008-wh
http://blog.sqlauthority.com/2009/06/23/sql-server-2005-2008-delete-duplicate-rows/
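On the third question: a CTE is just a named query over its base table, so when the CTE is updatable, a DELETE against it removes rows from the underlying table. A common sketch (the table and the columns that define a duplicate are hypothetical):

    WITH Numbered AS (
        SELECT ROW_NUMBER() OVER (
                   PARTITION BY col1, col2   -- the columns that define a duplicate
                   ORDER BY (SELECT NULL)    -- arbitrary order within each group
               ) AS rn
        FROM dbo.Orders
    )
    DELETE FROM Numbered
    WHERE rn > 1;                            -- keep the first copy of each group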

Normalizing too much vs too little, examples? [closed]

I don't usually design databases, so I'm having some doubts about how to normalize (or not) a table for registering users. The fields I have doubts about are:
locationTown: I plan to normalize for countries and have a separate table for them, but should I do the same for towns? I guess users would type this in when registering, rather than choose from a dropdown. Can one normalize when the input may be coming from users?
maritalStatus: I would have a choice of about 5 or so different statuses.
Also, does anyone know of a good place to find real world database schema/normalizing examples?
Thanks
locationTown - just store it directly inside the user table. Otherwise you will have to search for the existing town, taking typos and letter case into account. Also, some people use non-standard characters and languages (Kraków vs. Krakow vs. Cracow; see also: romanization). If you really want to have a table of towns, at least provide an auto-complete box so users are more likely to choose an existing town. Otherwise, prepare for lots of duplicates or near-duplicates.
maritalStatus - this, on the other hand, should be in a separate table. Or more accurately: use a single character or a number to represent the marital status. An extra table mapping this to a human-readable form is just for convenience (remember about i18n), and a foreign key constraint makes sure incorrect statuses aren't used; see the sketch below.
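A minimal sketch of that layout, with illustrative names only:

    CREATE TABLE MaritalStatus (
        StatusCode  CHAR(1)       PRIMARY KEY, -- e.g. 'S', 'M', 'D', 'W'
        Description NVARCHAR(50)  NOT NULL     -- human-readable label (localize as needed)
    );

    CREATE TABLE Users (
        UserId       INT IDENTITY  PRIMARY KEY,
        LocationTown NVARCHAR(100) NULL,       -- stored as typed, per the advice above
        StatusCode   CHAR(1)       NOT NULL
            REFERENCES MaritalStatus (StatusCode) -- FK rejects unknown statuses
    );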
I wouldn't worry about it too much - database normalization (3NF, et al) has been over-emphasized in academia and isn't overly practical in industry. In addition, we would need to see your whole schema in order to judge where these implementations are appropriate. Focus on indexing commonly-used columns before you worry about normalization.
You might want to take a look at this SO question before you dive in any further.

Join slows down sql [closed]

We are having a discussion about SQL Server 2008 and joins. One half says the more joins you have, the slower your SQL runs. The other half says that it does not matter, because SQL Server takes care of business, so you will not notice any performance loss. What is true?
Instead of asking the question the way you have, consider:
Can I get the data I want without the join?
No => You need the join, end of discussion.
It is also a matter of degree. It is impossible for a join not to add additional processing. Even if the Query Optimizer takes it out (e.g. left join with nothing used from the join) - it still costs CPU cycles to parse it.
Now if the question is about comparing joins to another technique, such as the special case of LEFT JOIN + IS NULL vs NOT EXISTS for a "records in X that are not in Y" scenario (sketched below), then let's discuss specifics - table sizes (X vs Y), indexes, etc.
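For reference, the two patterns being compared look like this (X and Y are placeholder tables with a hypothetical id/x_id relationship):

    -- Rows of X with no match in Y, via LEFT JOIN ... IS NULL:
    SELECT x.*
    FROM X AS x
    LEFT JOIN Y AS y ON y.x_id = x.id
    WHERE y.x_id IS NULL;

    -- The same result via NOT EXISTS; compare the execution plans on your data:
    SELECT x.*
    FROM X AS x
    WHERE NOT EXISTS (SELECT 1 FROM Y AS y WHERE y.x_id = x.id);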
It will slow it down: the more complicated a query, the more work the database server has to do to execute it.
But about that "performance loss": over what? Is there another way to get at the same data? If so, then you can profile the various options against each other to see which is fastest.
