I'm using this in an entity definition:
@PrimaryGeneratedColumn('uuid')
Id: string;
and getting UUIDs like:
C17D188A-E91E-EC11-AAF7-0AB75295BBB4
Looking at the first character of the third group of characters here, 'E', this should be UUID version 14, which doesn't exist? How does TypeORM generate a UUID?
To answer the question: TypeORM uses an RFC 4122-compliant UUID v4 generator to produce a string for uuid-type columns, as seen here.
However, the @PrimaryGeneratedColumn('uuid') decorator maps the column to a native uuid database field type if the database supports it. In that case, generation is delegated to the database engine, which is most likely not producing v4-compliant UUIDs.
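A quick sanity check is to read the UUID's version nibble, i.e. the first hex digit of the third group. A small Python sketch (the helper name is mine):

```python
import uuid

def version_nibble(u: str) -> int:
    # RFC 4122 stores the version in the first hex digit of the third group.
    return int(u.split('-')[2][0], 16)

print(version_nibble('C17D188A-E91E-EC11-AAF7-0AB75295BBB4'))  # 14
print(version_nibble(str(uuid.uuid4())))                       # always 4
```

14 is not a valid RFC 4122 version, which points to a database-generated GUID rather than TypeORM's own generator. SQL Server, for instance, displays GUIDs with the first three groups byte-swapped, so the 'EC11' group corresponds to stored bytes 11EC, which would read as a version 1 UUID.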
I'm making a simple CRUD application with MongoDB so I can learn more about it.
The application is a simple blog, I have a collection named "articles" which stores various documents, each one representing a post for my blog.
When I display the list of all blog posts, I can do a db.collection.find(), and list all of them.
But the question arises when I need to show a single post individually, i.e. query the collection for one specific document.
The logical solution would be an RDBMS with an auto-increment feature, but MongoDB is NoSQL and does not have auto-increment.
I'm using the auto-generated _id field of the document, which stores an ObjectId by default, which means that my URLs look like this:
http://localhost/blog/article.php?_id=5d41f6e5fc1a2f3d80645185
I saw in the documentation that the ObjectId contains a unique identifier for the server, together with a timestamp and a counter, isn't exposing these things a security risk?
As a solution, I stumbled onto UUID (https://docs.mongodb.com/manual/reference/method/UUID/), an auto-generated unique ID that doesn't expose a timestamp or machine info. It seems like a logical solution to use this instead of the ObjectId in _id for querying and finding a document.
So I can make my URLs look like this:
http://localhost/blog/article.php?_id=23829651-26f7-4092-99d0-5be8658c966e
But still: should I keep the _id property? Should I add another field called "id" that stores the UUID? Should I even use UUIDs at all?
Here's what I would consider before choosing an identifier:
Collision
Risk of collision is very low for both UUIDs and ObjectIDs. This has been discussed in detail in another question.
Nature
UUIDs are random, whereas ObjectID values always increase over time. This makes ObjectIDs a poor choice for a shard key, since monotonically increasing keys concentrate inserts on one shard.
Other uses
ObjectIDs embed the creation timestamp and can substitute for the commonly used createdAt field. A sort by ObjectID is a sort by creation time.
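The timestamp part is easy to verify yourself: the first four bytes of an ObjectId are a big-endian Unix timestamp. A Python sketch that decodes it without any Mongo driver (function name is mine):

```python
from datetime import datetime, timezone

def objectid_created_at(oid_hex: str) -> datetime:
    # The first 4 bytes (8 hex chars) of an ObjectId are a big-endian
    # Unix timestamp in seconds.
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# The ObjectId from the example URL above decodes to a mid-2019 date.
print(objectid_created_at('5d41f6e5fc1a2f3d80645185'))
```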
Insecure object references (OWASP)
Short definition: an attacker who has the ID of one object cannot deduce the ID of another. You can read more about this here. Neither UUIDs nor ObjectIDs are vulnerable to this.
Link to another question that discusses the security of ObjectIDs (thanks zbee).
Ease of use
Note: This is subjective
Using ObjectIDs is a lot easier in the Mongo ecosystem. The special aggregation operators for ObjectIDs, plus library support, add to this.
Portability
UUIDs are more portable than ObjectIDs. I do not know of any system other than Mongo that uses ObjectIDs internally, whereas other databases such as Postgres have a dedicated UUID data type, plus extensions for random generation, etc.
I would like to migrate the database from MySQL to MemSQL. Original database use UUID as ID generated by function UUID(). Is there any possibility, how to use a similar function for generating those ID's?
MemSQL does not have a built-in UUID() function, but you can generate unique ids in various ways, depending on what you need them for, such as:
Generate random hashes, e.g. SHA1(RAND()), or, to get 16 bytes of randomness, CONCAT(SUBSTRING(SHA1(RAND()), 1, 16), SUBSTRING(SHA1(RAND()), 1, 16)).
Use auto_increment to generate ids unique within a table
Generate UUIDs in the application side
If you need them to follow the UUID format, you can reformat them with string functions
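If the SQL side gets awkward, the application-side options are simple. A Python sketch using only the standard library (variable names are mine):

```python
import secrets
import uuid

# Option: generate a proper RFC 4122 v4 UUID on the application side.
app_side = str(uuid.uuid4())

# Option: take raw randomness and reformat it into the 8-4-4-4-12 UUID
# shape with string operations (note: version/variant bits are NOT set).
h = secrets.token_hex(16)  # 16 random bytes as 32 hex characters
uuid_shaped = f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"

print(app_side)
print(uuid_shaped)
```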
Imagine this kind of DB:
Authors(id, author)
Publication(id, authorID, Title, Year, ...)
What is the best way to handle string search queries, e.g. "2001 Smith Theory of Evolution"? I mean not in this particular case, but in general: how do you search records by more than one column?
For a simple/quick solution:
Consider creating a new (fulltext-indexed) terms column on your Publication table, which will receive every text string of interest to search (e.g. author name, publication date, title).
Then add a MATCH/AGAINST clause to your query (or to_tsquery() for Postgres).
Postgres doc: http://www.postgresql.org/docs/9.4/static/textsearch-tables.html
MySQL doc: https://dev.mysql.com/doc/refman/5.7/en/fulltext-search.html
If you find that you need finer control over search relevance, or stock search features like facets and autocomplete, then consider deploying Solr or Elasticsearch as an external index to your database.
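To make the terms-column idea concrete, here is a sketch of the MySQL variant in Python. The terms column name and the placeholder style are hypothetical, and the actual driver/connection code is omitted:

```python
# Denormalize every searchable field into one string destined for the
# fulltext-indexed `terms` column (hypothetical column from above).
def terms_for(author: str, title: str, year: int) -> str:
    return f"{author} {title} {year}"

# MATCH/AGAINST query against the hypothetical `terms` column
# (placeholder style follows common %s-based MySQL drivers).
SEARCH_SQL = """
    SELECT p.id, p.Title, a.author
    FROM Publication p
    JOIN Authors a ON a.id = p.authorID
    WHERE MATCH(p.terms) AGAINST (%s IN NATURAL LANGUAGE MODE)
"""

print(terms_for("Smith", "Theory of Evolution", 2001))
```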
Sorry if this question seems stupid, but I spent an hour searching and didn't find anything.
I'm using Liquibase for multiple databases (e.g. MSSQL, Oracle, and MySQL). When I write:
addColumn(tableName: "ABC_TEST") {
    column(name: "IS_ACTIVE", type: "boolean")
}
how do I know whether the type "boolean" will be converted to the proper type for each database?
And is there any documentation for the data type mapping? If I want to add another column that is a foreign key, which type should I use?
Check out this question (and its answers) to see the available types that Liquibase offers.
In my answer to that question there is a link to the relevant Liquibase classes that do the translation to the DB-specific types.
When you created the table holding the primary key with a "Liquibase type", Liquibase will have translated this to the DB-specific type.
Your foreign key should then use the same Liquibase type, and Liquibase will translate it likewise.
E.g. check out the class BigIntType.
With liquibase you would just use the "liquibase type": BIGINT.
On Oracle DBs it will translate to NUMBER(38, 0).
On MSSQL it will translate to ("BIGINT").
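For the foreign-key case, the new column can simply use the same Liquibase type as the referenced primary key. A sketch in the same Groovy DSL as above (table, column, and constraint names here are hypothetical):

```groovy
addColumn(tableName: "ABC_TEST") {
    // Same Liquibase type as the referenced PK; Liquibase translates it
    // per database (e.g. NUMBER(38, 0) on Oracle, BIGINT on MSSQL).
    column(name: "PARENT_ID", type: "BIGINT") {
        constraints(foreignKeyName: "fk_abc_test_parent",
                    references: "PARENT(ID)")
    }
}
```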
Liquibase has a concept of database dialects, much like Hibernate's. It uses these to know how to generate the correct DDL statements to add/change/delete columns, foreign keys, etc. When it connects to the database it uses the JDBC metadata to determine which database type you're using, and uses that to determine the correct dialect.
I have an active record class
class Service_Catalogue < ActiveRecord::Base
  set_table_name "service_catalogue"
  set_primary_key "myKey"
end
myKey is an nvarchar (sql server).
When I try and save it
service_catalogue = Service_Catalogue.new
service_catalogue.myKey = "somevalue"
service_catalogue.save
I get the following error:
IDENTITY_INSERT could not be turned OFF for table [service_catalogue] (ActiveRecord::ActiveRecordError)
It seems like ActiveRecord thinks the primary key is an identity column (it's not; it's an nvarchar) and so is failing.
Is there a way to tell it not to try and turn off identity insert?
UPDATE
It turns out the version of the ActiveRecord SQL Server adapter was to blame.
I had previously been using 1.0.0.9250, but somehow 2.2.19 got installed (presumably during a gem update). After reverting to the old version it works fine.
ActiveRecord likes to have integer ID columns and throws a fit if this is not the case. It can be coached to deal with alternatives, but this depends on your environment.
An alternative is to have a conventional integer-based ID field, and a unique "key" field that is used for lookups.
class Service_Catalogue < ActiveRecord::Base
  set_table_name "service_catalogue"
  validates_uniqueness_of :myKey
end
If it's possible, it might be more convenient to alter your schema to fit ActiveRecord than to coach ActiveRecord to deal with your schema. If not, you might need to really get down and dirty.
Another approach could be to use an alternative ORM like DataMapper which does not have the same limitations.
The particular problem I was experiencing was caused by an inadvertent gem upgrade (thankfully, now that I use Bundler, this no longer happens!).
The old ActiveRecord SQL Server adapter (1.0.0.9250) supports what I was trying to accomplish.