I have the following questions:
1. What does SETCURRENTKEY actually do?
2. What is the benefit of SETCURRENTKEY?
3. Why would I use SETCURRENTKEY?
4. When would I use SETCURRENTKEY?
5. What is the advantage of using an index, and how does it tie in with the analogy of a library's old card-catalogue system?
6. What type of database-querying efficiency problems does this function solve?
I have been searching all over the internet and the 'IT Pro Developer Help' internal Navision documentation for this poorly documented function, and I cannot find a proper answer to my questions.
The only thing I know is that SETCURRENTKEY sets the current key for a record variable and sorts the recordset accordingly, and that when it is used with only a few keys it can improve query performance. I have no idea what actually happens when a database uses an index versus not using one.
Someone told me this is how SETCURRENTKEY works:
It is like the old card-catalogue system in a library: without SETCURRENTKEY you would have to go through each shelf and manually filter for the book you want. You would find a mix of random books and would have to say: "No, not this one. Yes, this one." With SETCURRENTKEY you have an index analogous to the old system, where you go straight to a book or music CD based on its 'Author' or 'Artist', etc.
That's all fine, but I still can't properly answer my questions.
1. With SETCURRENTKEY you declare the key (table index, which can consist of many fields) to be used when querying the database with FINDSET/FINDFIRST/FINDLAST statements, and the order in which you will receive records while iterating the recordset with the NEXT statement.
2. Performance. The database server uses the selected key (table index) to retrieve the record set. You are always better off explicitly stating SETCURRENTKEY in your code, as it makes you think about your database structure and the indices required.
3. Performance, and so that you know ahead of time the order of records you will receive when iterating through a recordset.
4. When to use:
The typical use is this:
RecordVar.SETCURRENTKEY(...);    // declare the index to use
RecordVar.SETRANGE(Field, ...);  // declare the filters
RecordVar.SETFILTER(Field, ...);
RecordVar.SETRANGE(Field, ...);
...
IF RecordVar.FINDSET THEN
  REPEAT
    // do something with each record
  UNTIL RecordVar.NEXT = 0;
SETCURRENTKEY is declarative, and comes into effect only when FINDSET is executed. At the moment FINDSET is executed, the database will be queried on the table represented by RecordVar, using the filters declared by SETRANGE/SETFILTER, and the key/index declared by SETCURRENTKEY.
For 5. and 6., and generally, I would truly recommend familiarizing yourself with basic database index theory. That is exactly what this is, and you explained it pretty well yourself with the library/book analogy.
If you modify key fields (or filtered fields, even those not in the key) inside a loop, the standard way to do this in NAV is to declare a second record variable, do a GET on it using the primary key fields of the record variable you are looping through, then change and MODIFY the second record variable.
Related
Imagine a web form with a set of check boxes (any or all of them can be selected). I chose to save them as a comma-separated list of values stored in one column of the database table.
Now, I know that the correct solution would be to create a second table and properly normalize the database. It was quicker to implement the easy solution, and I wanted to have a proof-of-concept of that application quickly and without having to spend too much time on it.
I thought the saved time and simpler code were worth it in my situation. Is this a defensible design choice, or should I have normalized it from the start?
Some more context: this is a small internal application that essentially replaces an Excel file that was stored in a shared folder. I'm also asking because I'm thinking about cleaning up the program and making it more maintainable. There are some things in there I'm not entirely happy with, and one of them is the topic of this question.
In addition to violating First Normal Form because of the repeating group of values stored in a single column, comma-separated lists have a lot of other more practical problems:
Can’t ensure that each value is the right data type: no way to prevent 1,2,3,banana,5
Can’t use foreign key constraints to link values to a lookup table; no way to enforce referential integrity.
Can’t enforce uniqueness: no way to prevent 1,2,3,3,3,5
Can’t delete a value from the list without fetching the whole list.
Can't store a list longer than what fits in the string column.
Hard to search for all entities with a given value in the list; you have to use an inefficient table-scan. May have to resort to regular expressions, for example in MySQL:
idlist REGEXP '[[:<:]]2[[:>:]]' or in MySQL 8.0: idlist REGEXP '\\b2\\b'
Hard to count elements in the list, or do other aggregate queries.
Hard to join the values to the lookup table they reference.
Hard to fetch the list in sorted order.
Hard to choose a separator that is guaranteed not to appear in the values.
To solve these problems, you have to write tons of application code, reinventing functionality that the RDBMS already provides much more efficiently.
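To make the normalized alternative concrete, here is a minimal sketch using Python's bundled sqlite3 module; the contact/choice/contact_choice names are hypothetical stand-ins for the form in question.

import sqlite3

con = sqlite3.connect(":memory:")          # in-memory database for illustration
con.execute("PRAGMA foreign_keys = ON")    # SQLite enforces FKs only when asked

con.executescript("""
CREATE TABLE contact (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE choice (
    id    INTEGER PRIMARY KEY,
    label TEXT NOT NULL
);
-- One row per selected checkbox instead of a comma-separated string.
CREATE TABLE contact_choice (
    contact_id INTEGER NOT NULL REFERENCES contact(id),
    choice_id  INTEGER NOT NULL REFERENCES choice(id),
    PRIMARY KEY (contact_id, choice_id)    -- no duplicate selections possible
);
""")

# "Who selected choice 2?" becomes an indexed join, not a string scan.
rows = con.execute("""
    SELECT c.name
    FROM contact c
    JOIN contact_choice cc ON cc.contact_id = c.id
    WHERE cc.choice_id = 2
""").fetchall()

Every problem in the list above is then handled by a constraint or an index instead of application code.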
Comma-separated lists are wrong enough that I made this the first chapter in my book: SQL Antipatterns, Volume 1: Avoiding the Pitfalls of Database Programming.
There are times when you need to employ denormalization, but as @OMG Ponies mentions, these are exceptional cases. Any non-relational “optimization” benefits one type of query at the expense of other uses of the data, so be sure you know which of your queries need to be treated so specially that they deserve denormalization.
"One reason was laziness".
This rings alarm bells. The only reason you should do something like this is that you know how to do it "the right way" but you have come to the conclusion that there is a tangible reason not to do it that way.
Having said this: if the data you are choosing to store this way is data that you will never need to query by, then there may be a case for storing it in the way you have chosen.
(Some users would dispute the statement in my previous paragraph, saying that "you can never know what requirements will be added in the future". These users are either misguided or stating a religious conviction. Sometimes it is advantageous to work to the requirements you have before you.)
There are numerous questions on SO asking:
how to get a count of specific values from the comma separated list
how to get records that contain only a given set of 2/3/etc. specific values in that comma-separated list
Another problem with the comma separated list is ensuring the values are consistent - storing text means the possibility of typos...
These are all symptoms of denormalized data, and highlight why you should always model for normalized data. Denormalization can be a query optimization, to be applied when the need actually presents itself.
In general anything can be defensible if it meets the requirements of your project. This doesn't mean that people will agree with or want to defend your decision...
In general, storing data this way is suboptimal (e.g., it is harder to run efficient queries) and may cause maintenance issues if you modify the items in your form. Perhaps you could have found a middle ground and used an integer representing a set of bit flags instead?
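To illustrate that middle ground, a minimal sketch in Python with SQLite; the flag names are invented for the example, and SQLite happens to support the bitwise & operator directly in SQL.

import sqlite3

# Hypothetical checkboxes, one bit each (powers of two).
NEWSLETTER, PROMOTIONS, REMINDERS = 1, 2, 4

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE response (id INTEGER PRIMARY KEY, flags INTEGER NOT NULL)")
con.execute("INSERT INTO response (flags) VALUES (?)", (NEWSLETTER | REMINDERS,))

# Everyone who ticked REMINDERS, whatever else they ticked.
rows = con.execute(
    "SELECT id FROM response WHERE flags & ? != 0", (REMINDERS,)
).fetchall()

This stays compact, but note it shares one drawback with the CSV column: the meaning of each bit lives in application code, not in the schema.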
Yes, I would say that it really is that bad. It's a defensible choice, but that doesn't make it correct or good.
It breaks first normal form.
A second criticism is that putting raw input results directly into a database, without any validation or binding at all, leaves you open to SQL injection attacks.
What you're calling laziness and lack of SQL knowledge is the stuff that neophytes are made of. I'd recommend taking the time to do it properly and view it as an opportunity to learn.
Or leave it as it is and learn the painful lesson of a SQL injection attack.
I needed a multi-value column; it could be implemented as an XML field.
It could be converted to a comma-delimited list as necessary, and an XML list in SQL Server can be queried using XQuery.
By being an XML field, some of the concerns can be addressed:
With CSV: Can't ensure that each value is the right data type: no way to prevent 1,2,3,banana,5
With XML: values in a tag can be forced to be the correct type
With CSV: Can't use foreign key constraints to link values to a lookup table; no way to enforce referential integrity.
With XML: still an issue
With CSV: Can't enforce uniqueness: no way to prevent 1,2,3,3,3,5
With XML: still an issue
With CSV: Can't delete a value from the list without fetching the whole list.
With XML: single items can be removed
With CSV: Hard to search for all entities with a given value in the list; you have to use an inefficient table-scan.
With XML: xml field can be indexed
With CSV: Hard to count elements in the list, or do other aggregate queries.
With XML: not particularly hard
With CSV: Hard to join the values to the lookup table they reference.
With XML: not particularly hard
With CSV: Hard to fetch the list in sorted order.
With XML: not particularly hard
With CSV: Storing integers as strings takes about twice as much space as storing binary integers.
With XML: storage is even worse than a csv
With CSV: Plus a lot of comma characters.
With XML: tags are used instead of commas
In short, using XML gets around some of the issues with a delimited list, and it can be converted to a delimited list as needed.
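I can't demonstrate SQL Server's XML indexes here, but the underlying idea, a structured column the engine itself can enumerate, can be sketched with SQLite's JSON1 functions as a rough analogy (not the XML feature itself):

import sqlite3

# Requires a SQLite build with the JSON1 functions (standard in recent versions).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, idlist TEXT)")
con.execute("INSERT INTO t (idlist) VALUES ('[1,2,3,5]')")

# The engine enumerates the elements itself, so counting and searching
# become ordinary SQL instead of string gymnastics.
count = con.execute("SELECT COUNT(*) FROM t, json_each(t.idlist)").fetchone()[0]
hits = con.execute(
    "SELECT t.id FROM t, json_each(t.idlist) WHERE json_each.value = 2"
).fetchall()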
Yes, it is that bad. My view is that if you don't like using relational databases, then look for an alternative that suits you better; there are lots of interesting "NoSQL" projects out there with some really advanced features.
Well, I've been using a key/value-pair, tab-separated list in an NTEXT column in SQL Server for more than 4 years now and it works. You do lose the flexibility of making queries, but on the other hand, if you have a library that persists/de-persists the key-value pairs, then it's not that bad an idea.
I would probably take the middle ground: make each field in the CSV into a separate column in the database, but not worry much about normalization (at least for now). At some point, normalization might become interesting, but with all the data shoved into a single column you're gaining virtually no benefit from using a database at all. You need to separate the data into logical fields/columns/whatever you want to call them before you can manipulate it meaningfully at all.
If you have a fixed number of boolean fields, you could use an INT(1) NOT NULL (or BIT NOT NULL, if it exists) or CHAR(0) (nullable) for each. You could also use a SET (I forget the exact syntax).
I am trying to get a vendor to create an index on a Progress 10.2B database to aid in migrating data from said database, but the vendor is reluctant to create the index, saying it could impact data integrity. Their response is below. Is there any truth/merit in what is being said?
There are a number of reasons we will not add indices, but the main reason is, as you have outlined, that Progress selects the index it uses based on the parameters in the query. So for example if we had code that does the following:
Find first record where a = 1 and b = 2
As the existing index stands, this would find the record using index ‘M’ and it would find record ‘X’. If we add a new index to the table, there is a chance that this code could decide to use the new index to find the record and return record ‘Y’ instead.
Sure, creating indices is a core part of any database, but proper development practices would require heaps of testing before applying an index change to a production system. Without testing, the integrity of the system cannot be guaranteed.
So my thoughts on this are:
Progress selects the index it uses based on the parameters in the query
Isn't this how any database usually selects an index? Based on the required columns and the WHERE clause, it decides on the appropriate index (if any) available.
If we add a new index to the table there is a chance that this code could decide to use the new index to find the record and return record ‘Y’ instead.
To me, it almost sounds like they have programmed their application to rely on grabbing the first record out of the database. If it were to use a different index, then sure, it might order the results differently if no ORDER BY has been specified. If that is the case, then that is just poor programming.
I pretty much agree with you:
To me, it almost sounds like they have programmed their application to rely on grabbing the first record out of the database. If it were to use a different index, then sure, it might order the results differently if no ORDER BY has been specified. If that is the case, then that is just poor programming.
If they wrote their query correctly, an index doesn't change the result, just the speed. If they left out the ORDER BY and simply rely on an index returning rows in the right order anyway, another index could cause problems.
However, to emphasize this: in that case, the bug is the query.
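That point is easy to demonstrate; a minimal sketch with SQLite, where the "first" matching row without an ORDER BY is simply whatever the chosen access path happens to yield first:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [(1, 2, 'X'), (1, 2, 'Y')])

# Both rows match. Without ORDER BY the engine may return either one first,
# and adding an index can legitimately change which one that is.
first = con.execute("SELECT name FROM t WHERE a = 1 AND b = 2 LIMIT 1").fetchone()

# With an explicit ORDER BY the result no longer depends on the index chosen.
first = con.execute(
    "SELECT name FROM t WHERE a = 1 AND b = 2 ORDER BY name LIMIT 1"
).fetchone()   # always ('X',)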
I've been trying to learn programming for a while. I've studied Java and Python, and I'm comfortable with their syntax. Recently, I wanted to use what I've learnt to code a tangible piece of software from the ground up.
I want to implement a database engine, sort of a NoSQL database. I've put together a small document, sort of a specification to follow throughout my adventure of coding it. But all I know is a bunch of keywords; I don't know where to start.
Can someone help me figure out how to gather the knowledge I need for this kind of work, and in what order to learn things? I have searched for documents, but I feel like I'll end up finding unrelated or erroneous content, or starting from the wrong point, because implementing a complete database engine seems to be a truly complicated task.
I want to stress that I'd prefer theses, whitepapers, and (e)books to the code of other projects, because questions of this kind usually get answered with "read project X's source code", and I'm not at the level of comfortably reading and understanding source code.
First, you may have a look at the answers to How to write a simple database engine. While it focuses on a SQL engine, there is still a lot of good material in the answers.
Otherwise, a good project tutorial is Implementation of a B-Tree Database Class. The example code is in C++, but the description of what is done and why is probably what you'll want to look at anyway.
Also, there is Designing and Implementing Structured Storage (Database Engine) over at MSDN. Plenty of information there to help you in your learning project.
Because the accepted answer only offers (good) links to other resources, I thought I'd share my experience writing webdb, a small experimental database for browsers. I also invite you to read the source code. It's pretty small; you should be able to read through it and get a basic understanding of what it's doing in a couple of hours. Warning: I am a n00b at this, and since writing it I have learned a lot more and can see I did some things wrong. It can help you get started, though.
The basics: BTree
I started out by adapting an AVL tree to suit my needs. An AVL tree is a kind of self-balancing binary search tree. You store the key K and related data (if any) in a node, with all items with key < K in the left subtree and all items with key > K in the right subtree. You can use an array to store the data items if you want to support non-unique keys.
This tree will give you the basics: Create, Update, Delete and a way to quickly get an item by key, or all items with key < x, or with key between x and y etc. It can serve as the index for our table.
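To make that concrete, here is a minimal binary-search-tree sketch in Python (plain and unbalanced, so not actually AVL) with the two operations described: insert by key, and an in-order range scan.

class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

class ToyIndex:
    """Toy index; a real implementation would rebalance (AVL/B-tree)."""
    def __init__(self):
        self.root = None

    def insert(self, key, value):
        def _ins(node):
            if node is None:
                return Node(key, value)
            if key < node.key:
                node.left = _ins(node.left)
            else:
                node.right = _ins(node.right)
            return node
        self.root = _ins(self.root)

    def range(self, lo, hi):
        """Yield values with lo <= key <= hi, in key order."""
        def _walk(node):
            if node is None:
                return
            if lo < node.key:
                yield from _walk(node.left)
            if lo <= node.key <= hi:
                yield node.value
            if node.key < hi:
                yield from _walk(node.right)
        yield from _walk(self.root)

# Usage: idx = ToyIndex(); idx.insert(42, record)
# list(idx.range(10, 50)) -> records with 10 <= key <= 50, in key order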
A schema
As a next step I wrote code that lets the client code define a schema, with methods like createTable() etc. Schemas are typically associated with SQL, but even NoSQL sort of has a schema: it usually requires you to mark the ID field and any other fields you want to search on. You can make your schema as fancy as you want, but you typically want to model at least which column(s) serve as the primary key and which fields will be searched on frequently and need an index.
Creating a data structure to store a table
I decided to use the tree I created in the first step to store my items. These were simple JS objects. Having defined which field contains the PK, I could simply insert the item into the tree using that field's value as the key. This gives me quick lookup by ID (range).
Next I added another tree for every column that needs an index. In these trees I did not store the full record, but only the key. So to fetch a customer by last name, I would first use the index on last name to get the ID, then use the primary-key index to get the actual record. The reason I did not just store a reference to the actual object is that it makes set operations a little bit simpler (see the next step).
Querying
Now that we have a table with indexes for PK and search fields, we can implement querying. I did not take this very far as it becomes complicated quickly, but you can get some nice functionality with just some basics. WebDB does not implement joins; all queries operate only on a single table. But once you understand this you see a pretty clear (though long and winding) path to doing joins and other complicated stuff as well.
In WebDB, to get all customers with firstName = 'John' and city = 'New York' (assuming those are two search fields), you would write something like:
var webDb = ...
var johnsFromNY = webDb.customers.get({
  firstName: 'John',
  city: 'New York'
});
To solve it, we first do two lookups: we get the set X of all IDs of customers named 'John' and we get the set Y of all IDs of customers from New York. We then perform an intersection on these two sets to get all IDs of customers that are both named 'John' AND from New York. We then run through our set of resulting IDs, getting the actual record for each one and adding it to the result array.
Using the set operators like union and intersection we can perform AND and OR searches. I only implemented AND.
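In Python terms the AND query boils down to a set intersection; a sketch with invented data, where each secondary index maps a value to the set of matching IDs:

# Primary-key index: id -> record. Secondary indexes: value -> set of ids.
records = {
    1: {'id': 1, 'firstName': 'John', 'city': 'New York'},
    2: {'id': 2, 'firstName': 'John', 'city': 'Boston'},
    3: {'id': 3, 'firstName': 'Jane', 'city': 'New York'},
}
by_first_name = {'John': {1, 2}, 'Jane': {3}}
by_city = {'New York': {1, 3}, 'Boston': {2}}

# AND = intersection of the id sets, then resolve ids via the primary index.
ids = by_first_name['John'] & by_city['New York']   # {1}
johns_from_ny = [records[i] for i in sorted(ids)]
# An OR query would use union instead: set_x | set_y.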
Doing joins would (I think) involve creating temporary tables in memory, then populating them as the query runs with the joined results, then applying the query criteria to the temp table. I never got there. I attempted some syncing logic next but that was too ambitious and it went downhill from there :)
I have a table of data that I'm trying to "standardize". The data entered into the table wasn't static or standardized (like with drop-down lists of answers), leaving me with multiple variations of answers where I want a static, universal answer.
For instance, let's say that there's a column in the database called "Type of pet". Because user input wasn't standardized, people could enter in variations of a specific type of pet, rather than generalized form of the pet. So instead of just entering "Dog", there are different versions of dogs like "Collie", "Mutt", "Labrador", etc.
How do I go about transcribing these answers into their generalized form -- replacing Collie/Mutt/Labrador/etc answers in the table with just "Dog" (or "Cat", or "Bird", etc.)?
I realize there needs to be some form of a manually-entered "translation" function. My gut reaction is that a long-spanning list of stacked if-statements would be inefficient, as well as being tedious to control and expand.
Is there some kind of process or system for doing something like this? Like some type of lookup table system/matrix?
I'm assuming a foreach loop to iterate through the array of records would be most appropriate. Within each iteration, it would test the pet variable against some kind of list (that I would have created manually). But what would you use for this lookup table/list, or for this step of the process? Would you have it as some type of SQL database table, an array, a CSV file, etc.?
Then, once this comparison is completed and the "translated" equivalent of the type of pet is determined, the foreach loop would update that specific row, either overwriting the old non-standardized value, or tacking the standardized equivalent onto a new column (for later verification).
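For what it's worth, a hand-maintained dict plus a batch of UPDATEs is one minimal way to do this; a sketch in Python with SQLite, where the responses table, the pet_type column, and the mapping itself are all invented for illustration:

import sqlite3

# Hand-maintained translation table: variant -> generalized form.
PET_MAP = {
    'Collie': 'Dog', 'Mutt': 'Dog', 'Labrador': 'Dog',
    'Siamese': 'Cat', 'Parakeet': 'Bird',
}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE responses (id INTEGER PRIMARY KEY, pet_type TEXT)")
con.executemany("INSERT INTO responses (pet_type) VALUES (?)",
                [('Collie',), ('Siamese',), ('Mutt',)])

# Write the result into a new column so the original value survives
# for later verification, as suggested above.
con.execute("ALTER TABLE responses ADD COLUMN pet_general TEXT")
con.executemany(
    "UPDATE responses SET pet_general = ? WHERE pet_type = ?",
    [(general, variant) for variant, general in PET_MAP.items()],
)
con.commit()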
My gut reaction is that a long-spanning list of stacked if-statements would be inefficient, as well as being tedious to control and expand.
100% correct, and because of this you really only have one option: manually go through the database and clean it up. Once that is done, you will need to restrict user input using drop-down lists rather than raw text input.
Depending on your users, you might want to look at how Stack Overflow does tags, essentially allowing anyone to do the cleanup for you.
But if you have something like 150,000 records, an SQL find-and-replace query might help clean up the data to start.
This sounds like a data-normalization project to me, though I don't have a lot of experience with it in practice. In theory, you start with how the data is entered: free-text fields allow users to enter anything they want, and you'd want to change that after scrubbing the data. It also pays to know how the data got in in the first place: was it free text, a bullet list, a drop-down menu, etc.?
You'd also want to create a data dictionary of all the standardized terms that can replace the multitude of variations.
Then you could write an update query that goes through the old data and replaces it with the new terms, using wildcards to match the variations.
https://support.office.com/en-us/article/Use-the-Find-and-Replace-dialog-box-to-change-data-2eee8d02-5a40-4328-ba56-ec0406865680
This could be a more automated way of scrubbing the data than plain find-and-replace, too.
-Al
I have some questions regarding database performance in general. I'm using SQLite, but I assume the performance remarks apply to all relational databases?
I have a database with a table that stores data for about 200 variables, and I write about 50 variable values per second to it. Each written entry contains the id of the variable, a value, and a timestamp. Reading is done very rarely but needs to be as fast as possible, retrieving the data for one variable in chronological order. Each query only ever needs the data of a single variable.
How do I design the database so the reading is as fast as possible:
1. I make one table that contains all the variables. Each variable is stored as an id, and I index the table on the id and timestamp. The downside is that the index makes writes slower.
2. I make 200 tables, one per variable, and index each on the timestamp.
I think the second solution is the most performant, but creating a table for each variable doesn't seem right. Can someone give me some advice?
Thanks
If you really want to use a database, use the first approach, but make sure you are inserting your data in a single transaction; benchmarks show it makes writing much faster.
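A sketch of that first approach in Python's sqlite3 module, with invented table and column names: one table, a composite index matching the read pattern, and batched inserts in a single transaction.

import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IF NOT EXISTS samples (
    var_id INTEGER NOT NULL,
    ts     REAL    NOT NULL,
    value  REAL    NOT NULL
)""")
# Composite index matches the read pattern: one variable, in time order.
con.execute("CREATE INDEX IF NOT EXISTS idx_var_ts ON samples (var_id, ts)")

batch = [(7, time.time(), 42.0)] * 50      # roughly one second's worth of writes
with con:                                  # one transaction for the whole batch
    con.executemany("INSERT INTO samples VALUES (?, ?, ?)", batch)

# The chronological read of one variable uses the index for filter and order.
rows = con.execute(
    "SELECT ts, value FROM samples WHERE var_id = ? ORDER BY ts", (7,)
).fetchall()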
Are your searches performed on variable name/id AND timestamp, or on variable name only? If the latter, indexing on timestamp may not be necessary...
Are you sure you need a database? By the sounds of it, a flat-file will work well enough for you, and you don't sound like you actually need any of the trappings of a database. Just create a flat-file for each variable and keep handles to each open. Write to them through your standard buffered IO as often as you need. To read, just open one file and deserialize.
If you are using a relational database, I am guessing those variables are all related? If they are just values, for instance, settings, then maybe a file or something similar may be better.
If you only ever have to query values for ONE variable then, if you insist on using a database (which may not be a bad thing!), you should create one table per variable:
id (unsigned int, auto-increment, primary key)
timestamp (datetime)
variable (whatever it is supposed to be)
Do not skimp on data just because "it might take more room on the hard drive" - that only leads to trouble.
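For completeness, the table-per-variable layout described above could be sketched like this in Python with SQLite; the variable names are illustrative.

import sqlite3

con = sqlite3.connect(":memory:")
for name in ("temperature", "pressure"):   # one table per variable
    con.execute(f"""CREATE TABLE IF NOT EXISTS var_{name} (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp TEXT NOT NULL,
        value     REAL NOT NULL
    )""")
    con.execute(f"CREATE INDEX IF NOT EXISTS idx_{name}_ts ON var_{name} (timestamp)")
con.commit()

# Reading one variable never touches the other tables.
rows = con.execute(
    "SELECT timestamp, value FROM var_temperature ORDER BY timestamp"
).fetchall()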