I have 3 tables: A, B, C.
A consists of a single column D.
B consists of columns E, F, G, H, I, J (the PK is J).
C has a foreign key K to table B.
Now I need (F, G, H) to be unique, but if G is null then I need (F, H) unique and I and E unique. (And exactly one of G and I must be null.)
Is there a way I can do this in the database rather than programmatically?
Thanks.
I'm pretty sure you can do this with unique indexes and the fact that NULL values are ignored in a unique index (true in most databases, though not all).
First, create an index on F, G, and H:
create unique index idx_b_f_g_h on b(f, g, h);
This handles the "F, G, H unique" case. To handle "if G is null, then F, H unique", do:
create unique index idx_b_f_h_j on b(f, h, (case when G is null then 0 else j end));
This replaces non-NULL values of G with the primary key, ensuring uniqueness for those rows, and uses an arbitrary "constant" value when G is null. (Note the constant should be of the same type as the primary key.)
To handle "I and E unique", you can also use a functional index. I think you mean:
create unique index idx_b_i_e on b(coalesce(i, e));
You can handle the fact that i or e is NULL using a check constraint.
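The CASE-expression trick can be sketched end-to-end in SQLite, which also supports expression indexes and treats NULLs as distinct in unique indexes; the table layout here is a minimal stand-in for B:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE b (e, f, g, h, i, j INTEGER PRIMARY KEY)")
cur.execute("CREATE UNIQUE INDEX idx_b_f_g_h ON b(f, g, h)")
cur.execute(
    "CREATE UNIQUE INDEX idx_b_f_h_j ON b(f, h, CASE WHEN g IS NULL THEN 0 ELSE j END)"
)

# Same (f, h) with different non-NULL g: both rows are accepted,
# because the third index column is the (always unique) primary key j.
cur.execute("INSERT INTO b (f, g, h) VALUES (1, 10, 2)")
cur.execute("INSERT INTO b (f, g, h) VALUES (1, 20, 2)")

# Same (f, h) with g NULL twice: the second insert is rejected,
# because the third index column collapses to the constant 0.
cur.execute("INSERT INTO b (f, g, h) VALUES (5, NULL, 6)")
try:
    cur.execute("INSERT INTO b (f, g, h) VALUES (5, NULL, 6)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)
```

The same indexes should translate directly to any database with expression (functional) indexes, though the NULL semantics of unique indexes vary by vendor.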
How do you define a 1-to-many relationship between two tables in Power BI when multiple columns are involved?
I.e., columns A, B, and C in table 1 tied to columns E, F, and G in table 2, with a 1-to-many relationship.
You can create a new column concatenating the fields, like:
tied01 = CONCATENATE( CONCATENATE(your_dataset[Column A], your_dataset[Column B]), your_dataset[Column C])
And for the second table:
tied02 = CONCATENATE( CONCATENATE(your_dataset[Column E], your_dataset[Column F]), your_dataset[Column G])
Once you have these two new columns, you can relate them as a single column, or create a dynamic table referencing the newly created columns.
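One caveat with plain concatenation: without a delimiter, different key combinations can collide ("AB"+"C" equals "A"+"BC"). A minimal Python sketch of the problem and the fix; in DAX you could likewise insert a separator that cannot occur in the data (e.g. via COMBINEVALUES):

```python
# Two different key triples that collide when concatenated with no delimiter.
row1 = ("AB", "C", "D")
row2 = ("A", "BC", "D")

def naive(cols):
    # No delimiter: distinct keys can produce the same string.
    return "".join(cols)

def delimited(cols):
    # A delimiter that cannot appear in the data keeps the keys distinct.
    return "|".join(cols)

print(naive(row1) == naive(row2))          # the false match
print(delimited(row1) == delimited(row2))  # resolved by the delimiter
```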
Database: Netezza. (Let's say I have 10 rows in a source table.)
I have a source table with columns A, B, C, D, E, F. In some rows (6 of them, say), columns A, B, C, and D have the same values but columns E and F differ. I also have rows where the values in all of columns A, B, C, D, E, and F are different.
Now I need a query where rows sharing the same values in the first 4 columns are displayed on one line, with the differing E and F values concatenated into a single column as "(value of E, value of F)":
Select columnA, columnB, columnC, columnD, column(E, F) from ;
And rows whose values in columns A, B, C, D, E, and F are all different should display on separate rows as-is (since these records are unique).
I hope you guys understand my question.
I need help to get the output as mentioned above.
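The intended transformation can be sketched procedurally; this Python sketch uses hypothetical sample values, with itertools.groupby standing in for a Netezza grouping aggregate:

```python
from itertools import groupby

# Hypothetical sample rows (A, B, C, D, E, F); the first two share A-D.
rows = [
    ("a1", "b1", "c1", "d1", "e1", "f1"),
    ("a1", "b1", "c1", "d1", "e2", "f2"),
    ("a2", "b2", "c2", "d2", "e3", "f3"),
]

key = lambda r: r[:4]  # group on columns A-D
result = []
for k, grp in groupby(sorted(rows, key=key), key=key):
    grp = list(grp)
    if len(grp) > 1:
        # Duplicated A-D: one output line, (E, F) pairs concatenated.
        combined = ", ".join("({}, {})".format(r[4], r[5]) for r in grp)
        result.append(k + (combined,))
    else:
        # Unique row: emit as-is.
        result.append(grp[0])

for line in result:
    print(line)
```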
Here's the problem. Let it be known that I'm very new to Haskell, and the declarative style is totally different from what I'm used to. I've made a database of sorts, and the user can input commands like Add (User "Name") or Create (Table "Funding"). I'm trying to create a function that takes as parameters a list of commands, a User, a Table, and a column name (as a string), and returns a list containing the values in that column if the user has access to them (i.e. somewhere in the list of commands there is one that matches Allow (User name, Table "Funds")). We can assume the table exists.
module Database where
type Column = String
data User = User String deriving (Eq, Show)
data Table = Table String deriving (Eq, Show)
data Command =
    Add User
  | Create Table
  | Allow (User, Table)
  | Insert (Table, [(Column, Integer)])
  deriving (Eq, Show)
-- Useful function for retrieving a value from a list
-- of (label, value) pairs.
lookup' :: Column -> [(Column, Integer)] -> Integer
lookup' c' ((c,i):cvs) = if c == c' then i else lookup' c' cvs
lookupColumn :: [(Column, Integer)] -> [Integer]
lookupColumn ((c, i):cvs) = if null cvs then [i] else [i] ++ lookupColumn cvs
select :: [Command] -> User -> Table -> Column -> Maybe [Integer]
select a b c d =
  if not (elem (b, c) [(g, h) | Allow (g, h) <- a])
    then Nothing
    else Just (lookupColumn [(d, x) | Insert (c, [(d, x), _]) <- a])
I have gotten it to work, but only in very select cases. Right now, the format of the input has to be such that the column we want the values from must be the first column in the table. Example input is below. Running: select example (User "Alice") (Table "Revenue") "Day" returns Just [1,2,3] like it should, but replacing Day with Amount doesn't work.
example = [
Add (User "Alice"),
Add (User "Bob"),
Create (Table "Revenue"),
Insert (Table "Revenue", [("Day", 1), ("Amount", 2400)]),
Insert (Table "Revenue", [("Day", 2), ("Amount", 1700)]),
Insert (Table "Revenue", [("Day", 3), ("Amount", 3100)]),
Allow (User "Alice", Table "Revenue")
]
A bit of explanation about the functions. select is the function which should return the list of integers in that column. Right now, it's only matching the first column, but I'd like it to work with any number of columns, not knowing which column the user wants ahead of time.
[(d, x) | Insert (c, [ (d, x ), _ ]) <- a] returns a list of tuples that match only the first tuple in each list of (Column, Integer) tuples.
lookupColumn takes in a list of tuples and returns a list of the integers within it. Unlike lookup', we know that the list this takes in has only the correct column's (Column, Integer) tuples within it. lookup' can take in a list of any number of tuples, but must check if the column names match first.
Any help at all would be greatly appreciated.
There are a couple of strange things in your code; for example:
lookupColumn :: [(Column, Integer)] -> [Integer]
lookupColumn ((c, i):cvs) = if null cvs then [i] else [i] ++ lookupColumn cvs
is in every way longer than the equivalent (and probably faster) map snd.
Furthermore, when you're defining your own data structures, tuples are often superfluous; you could just write:
data Command = Add User
             | Create Table
             | Allow User Table
             | Insert Table [(Column, Integer)]
             deriving (Eq, Show)
The actual problem is the _ in your select statement, which explicitly tells Haskell to throw away the second (Column, Integer) pair in each row. Instead you want something which grabs all (Column, Integer) pairs associated with a table:
getCells :: [Command] -> Table -> [(Column, Integer)]
getCells db t = concat [cis | Insert t' cis <- filter isInsert db, t == t']
  where isInsert (Insert _ _) = True
        isInsert _            = False
(note that this is using the un-tupled version of Insert that I wrote above). With this the algorithm becomes much easier:
select :: [Command] -> User -> Table -> Column -> Maybe [Integer]
select db user table col
  | Allow user table `elem` db = Just [i | (c, i) <- getCells db table, col == c]
  | otherwise                  = Nothing
What's doing the majority of the "work" here? Actually it's just the concat :: [[a]] -> [a] that we used in getCells. By concatenating together all of the (Column, Integer) pairs for all of the rows/cols in the table, we have a really easy time of pulling out only the column that we need.
Todo: stop this code from doing something unexpected when someone says Insert (Table "Revenue") [("Amount", 1), ("Amount", 2400)], which will appear in the output as two rows even though it only comes from one row. You can either normalize-on-input, which will do pretty well, or return [Maybe Integer], giving nulls for the rows which do not have a value (lookup in the standard Prelude will take the place of concat in doing your work for you).
I have this code, that uses PBC Library:
element_t pk, pk_temp;
element_init_G2(pk, pairing);         /* pk is an element of G2 */
element_init_G2(pk_temp, pairing);    /* so is pk_temp */
element_init_Zr(ci, pairing);         /* ci is a scalar in Zr */
element_pow_zn(pk_temp, pk_temp, ci); /* pk_temp = pk_temp^ci */
element_set1(pk);                     /* set pk to the identity element */
element_add(pk, pk, pk);              /* pk = pk + pk */
element_mul_zn(pk, pk, pk_temp);      /* pk = pk * pk_temp */
When I run this program (ci has a value from earlier calculations), this is the output that I get:
pk_temp
[116278406325033872100813200201193617766578695181932793603160637278854742066192092884782310233584512588249536578523628847229234460071209045611450183651531, 2021454548422679707182594138446448992982063147118097540714810473185383559710078393323207940613550542761869670939719707936590719813436809712827363459486303]
ci
557018308384393102708847545615423648196401851115
After pk_temp^ci
pk_temp
[108256843552655255908398113161102639677286937761571877242159361199581616450078081163742350174144405610156719380747507503987165257266343606269839543701390, 315460454942637140235438889718432767280220200962474346118962958364243678526968323118117335000004382683381426774251553048778733132443252812268528626451784]
After pk = pk + pk
pk
0
After pk = pk * pk_temp
pk
O
UPDATE
If pk is initialized as an element in Zr, the addition works.
element_mul_zn(pk, pk, pk_temp);
But in fact, your pk_temp is in G2, not Zn.
See the definition in include/pbc_field.h
static inline void element_mul_zn(element_t c, element_t a, element_t z) {
mpz_t z0;
PBC_ASSERT_MATCH2(c, a);
//TODO: check z->field is Zn
mpz_init(z0);
element_to_mpz(z0, z);
element_mul_mpz(c, a, z0);
mpz_clear(z0);
}
The code does not check whether z->field is Zn; it just converts the G2 element to an mpz.
Converting G2 to mpz does not make sense, and element_to_mpz just gives you 0.
I'm not sure which type of pairing you use, so I'll suppose a type A pairing. The function
element_set1(pk);
sets pk to O, the point at infinity, since pk is an element of G2. So the result should look like this:
element_set1(pk);
pk = O
After pk = pk + pk
pk = O
After pk = pk * pk_temp
pk
O
The result of pk * pk_temp is O because O = c * O for every scalar c.
We generally do not accept code-review questions here; perhaps another Stack Exchange site (not Crypto) would be a better fit.
On the other hand, I do notice that you do:
element_set1(pk);
Would that set pk to 1? If so, why would it be surprising that pk would be set to 2 after you add pk to itself?
Warning: I know very little about database collations so apologies in advance if any of this is obvious...
We've got a database column that contains urls. We'd like to place a unique constraint/index on this column.
It's come to my attention that under the default db collation Latin1_General_CI_AS, dupes exist in this column because (for instance) the URLs http://1.2.3.4:5678/someResource and http://1.2.3.4:5678/SomeResource are considered equal. Frequently this is wrong: the kind of server these URLs point at is case-sensitive.
What would be the most appropriate collation for such a column? Obviously case sensitivity is a must, but Latin1_General? Are URLs Latin1_General? I'm not bothered about lexicographic ordering, but equality for unique indexes/grouping is important.
You can alter the table to set a CS (case-sensitive) collation for this column:
ALTER TABLE dbo.MyTable
ALTER COLUMN URLColumn varchar(max) COLLATE Latin1_General_CS_AS
Also you can specify collation in the SQL statement:
SELECT * FROM dbo.MyTable
WHERE UrlColumn like '%AbC%' COLLATE Latin1_General_CS_AS
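SQL Server collations differ from SQLite's, but the effect of a case-insensitive versus a case-sensitive (binary) collation on a unique column can be sketched with SQLite's built-in NOCASE and BINARY collations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# NOCASE plays the role of a CI collation: the two URLs collide.
cur.execute("CREATE TABLE t_ci (url TEXT COLLATE NOCASE UNIQUE)")
cur.execute("INSERT INTO t_ci VALUES ('http://1.2.3.4:5678/someResource')")
try:
    cur.execute("INSERT INTO t_ci VALUES ('http://1.2.3.4:5678/SomeResource')")
    ci_rejected = False
except sqlite3.IntegrityError:
    ci_rejected = True  # case-insensitive collation treats the URLs as dupes

# The default BINARY collation plays the role of a CS collation:
# both rows are accepted.
cur.execute("CREATE TABLE t_cs (url TEXT UNIQUE)")
cur.execute("INSERT INTO t_cs VALUES ('http://1.2.3.4:5678/someResource')")
cur.execute("INSERT INTO t_cs VALUES ('http://1.2.3.4:5678/SomeResource')")
print(ci_rejected)
```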
The letters CI in the collation name indicate case insensitivity.
For a URL, which uses a small subset of Latin characters and symbols, try Latin1_General_CS_AI.
Latin1_General uses code page 1252 (1), and a URL's allowed characters are included in that code page (2), so you can say that URLs are Latin1_General.
You just have to select the case-sensitive option: Latin1_General_CS_AS.
RFC 3986 says:
The ABNF notation defines its terminal values to be non-negative integers (codepoints) based on the US-ASCII coded character set [ASCII].
Wikipedia says that the allowed chars are:
Unreserved
May be encoded but it is not necessary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
a b c d e f g h i j k l m n o p q r s t u v w x y z
0 1 2 3 4 5 6 7 8 9 - _ . ~
Reserved
Have to be encoded sometimes
! * ' ( ) ; : @ & = + $ , / ? % # [ ]
It seems there are no conflicts between these characters in comparison operations. You could also use the HASHBYTES function to make this comparison.
But this kind of comparison is not the major problem. The major problem is that http://domain:80 and http://domain may be the same resource, and a URL containing percent-encoded characters may look different from its decoded equivalent.
In my opinion, RDBMSs will eventually incorporate these kinds of structures as new data types: url, phone number, email address, mac address, password, latitude, longitude, etc. I think collation can help, but it won't solve this issue.
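The port/encoding issue is really a normalization problem. A minimal, hypothetical normalizer (the function name and rules are illustrative, not a standard API) might lowercase the scheme and host and drop default ports before the value ever reaches the unique column:

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize(url):
    # Illustrative normalizer: lowercase scheme and host, drop default
    # ports, keep the (case-sensitive) path, query, and fragment as-is.
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname or ""
    port = parts.port
    if port is None or port == DEFAULT_PORTS.get(scheme):
        netloc = host
    else:
        netloc = "{}:{}".format(host, port)
    return urlunsplit((scheme, netloc, parts.path, parts.query, parts.fragment))

print(normalize("http://Domain:80/SomeResource"))
```

A real normalizer would also need to handle percent-encoding (decoding unreserved characters, uppercasing hex digits), which this sketch omits.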