I want to store an array in a database. I use Ruby, Sequel and PostgreSQL, but my question is about the right approach, not the concrete implementation.
I see three paths:
1. Use a special adapter. It lets me save an array like [1,2,3,4] in the DB after some conversion: in the database it will look like "{1,2,3,4}". But after that I don't know how to convert the entry back to [1,2,3,4] cleanly.
2. Convert the array to JSON and save it to the database. An array like [1,2,3,4] will be stored as "[1,2,3,4]", and I can easily convert it back with JSON.parse.
3. Serialize the array with Marshal and save that. The array [1,2,3,4] will be stored as "\x04\b[\ti\x06i\ai\bi\t", but this seems risky, because the format can change after a Ruby update (or am I wrong?).
Can anybody tell me the right way?
Sequel supports Postgres array columns natively, assuming you don't need to normalize your schema further.
How do I define an ARRAY column in a Sequel Postgresql migration?
http://sequel.jeremyevans.net/rdoc-plugins/files/lib/sequel/extensions/pg_array_rb.html
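For example, a minimal sketch of the pg_array extension round-tripping an array (connection string and table name are illustrative):

require 'sequel'

DB = Sequel.connect('postgres://localhost/mydb')
DB.extension :pg_array

# An integer[] column, as you would declare it in a migration.
DB.create_table?(:items) do
  primary_key :id
  column :numbers, 'integer[]'
end

# Sequel returns the value as a Sequel::Postgres::PGArray,
# which behaves like an ordinary Ruby Array.
DB[:items].insert(numbers: Sequel.pg_array([1, 2, 3, 4]))
DB[:items].first[:numbers].to_a # => [1, 2, 3, 4]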
IMHO, the clean approach would be an extra table. Ideally, each row in the DB represents a single observation, and having an array or JSON column most probably contradicts that. I suggest looking at that extra table as good design, not as overkill.
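As an illustration of that design (table and column names are hypothetical), each array element gets its own row, and the array is reassembled with an ordered query:

DB.create_table?(:item_numbers) do
  primary_key :id
  foreign_key :item_id, :items
  Integer :position # element order within the original array
  Integer :value
end

DB[:item_numbers].where(item_id: 1).order(:position).select_map(:value)
# => [1, 2, 3, 4]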
Make use of serialize provided by ActiveRecord
class User < ApplicationRecord
  serialize :preferences, Array
end
# In a migration, add a string field:
class AddSerializePreferencesToUsers < ActiveRecord::Migration[5.1]
  def change
    add_column :users, :preferences, :string
  end
end
# In the Rails console:
u = User.first
u.preferences # => []
u.preferences << 'keep coding'
u.preferences # => ['keep coding']
u.save
u.reload.preferences # => ['keep coding']
Hope this helps
I have a Person ActiveRecord model with some fields like :name, :age, etc.
Person has a 1:1 relationship with Account, where every person has an account.
I have some code that does:
Account.create!(person: current_person)
where current_person is an existing Person ActiveRecord object.
Note: the Account table has a person_id field, and both models declare has_one for each other.
Now I believe we could do something like the following for bulk creation:
Account.create!([{ person: person3 }, { person: person2 }, ...])
I have an array of persons, but I am not sure of the best way to convert it to an array of hashes all having the same key.
Basically, the reverse of Convert array of hashes to array is what I want to do.
Why not just loop over your array of objects?
[person1, person2].each { |person| Account.create!(person: person) }
But if for any reason any of the items you loop over fails Account.create!, you may be left in a partial state, so you may want to wrap this in an ActiveRecord transaction.
ActiveRecord::Base.transaction do
  [person1, person2].each { |person| Account.create!(person: person) }
end
The create method actually persists each hash individually, as shown in the source code, so it's probably not what you are looking for. Either way, the following code would do the job:
Account.create!(persons.map { |person| { person_id: person.id } })
If you need to create all the records in a single database operation and are using Rails 6+, you can use the insert_all method.
Account.insert_all(persons.map { |person| { person_id: person.id } })
For earlier versions of Rails, consider the activerecord-import gem.
# combination(1).to_a converts [1, 2, 3] to [[1], [2], [3]]
Account.import [:person_id], persons.pluck(:id).combination(1).to_a
I'm using a PostgreSQL database, which allows an array datatype; in addition, Django provides PostgreSQL-specific model fields for that.
My question is: how can I filter objects based on the last element of the array?
from django.contrib.postgres.fields import ArrayField

class Example(models.Model):
    tags = ArrayField(models.CharField(...))

example = Example.objects.create(tags=['tag1', 'tag2', 'tag3'])
example_tag3 = Example.objects.filter(tags__2='tag3')
I want to filter, but I don't know the size of the tags array. Is there any dynamic filtering, something like:
example_tag3 = Example.objects.filter(tags__last='tag3')
I don't think there is a way to do that without "killing the performance" other than using raw SQL (see this). But you should avoid doing things like this; from the docs:
Tip: Arrays are not sets; searching for specific array elements can be a sign of database misdesign. Consider using a separate table with a row for each item that would be an array element. This will be easier to search, and is likely to scale better for a large number of elements.
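That said, if you do want the raw SQL route, here is a sketch (the table name myapp_example is an assumption): Postgres arrays are 1-based and report their length via array_length, so the last element can be addressed directly:

last_tag3 = Example.objects.raw(
    'SELECT * FROM myapp_example '
    'WHERE tags[array_length(tags, 1)] = %s',
    ['tag3'],
)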
Adding to the above answer and comment, if changing the table structure isn't an option, you can filter your query based on the first element of an array by using field__0:
example_tag3 = Example.objects.filter(tags__0='tag1')
However, I don't see a way to access the last element directly in the documentation.
Say I have the mtcars dataset, and I want to take three columns and turn them into a JSON array. How do I convert this to a JSON array, and is it possible to pass it into a PostgreSQL database?
library(jsonlite)
df <- mtcars
attach(mtcars)
json.column <- cbind(mpg,cyl,disp)
Do I use toJSON()?
mtcars.json <- toJSON(json.column)
https://cran.r-project.org/web/packages/jsonlite/vignettes/json-aaquickstart.html
From the vignette's simplification table: an array of objects such as [{"name":"Erik", "age":43}, {"name":"Anna", "age":32}] maps to a data frame (simplifyDataFrame).
Keep your data as a data.frame, not a matrix. Use
json.column <- data.frame(mpg,cyl,disp)
toJSON(json.column)
# [{"mpg":21,"cyl":6,"disp":160},{"mpg":21,"cyl":6,"disp":160}, ...
Also, you should avoid the use of attach(). It can cause lots of problems if you forget detach(). Plus, you can often use with() to avoid it:
json.column <- with(mtcars, data.frame(mpg,cyl,disp))
(For starters, never use attach! It's dangerous! Use with instead, typically.)
There are a bunch of ways to do it. Here's how to create the values using dplyr:
library(dplyr)

qq <- rowwise(mtcars) %>%
  mutate(newcol = as.character(jsonlite::toJSON(list(mpg = mpg, cyl = cyl, disp = disp))))
> qq$newcol
[1] "{\"mpg\":[21],\"cyl\":[6],\"disp\":[160]}" "{\"mpg\":[21],\"cyl\":[6],\"disp\":[160]}"
[3] "{\"mpg\":[22.8],\"cyl\":[4],\"disp\":[108]}" "{\"mpg\":[21.4],\"cyl\":[6],\"disp\":[258]}"
...
From there, if your Postgres database is set up with newcol as a JSON type, I think just writing that table as usual should work.
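For instance, a sketch using DBI with the RPostgres driver (connection details and table name are assumptions); field.types forces the column to Postgres's json type:

library(DBI)

con <- dbConnect(RPostgres::Postgres(), dbname = "mydb") # illustrative connection
df_out <- data.frame(newcol = qq$newcol, stringsAsFactors = FALSE)
dbWriteTable(con, "cars_json", df_out,
             field.types = c(newcol = "json"),
             overwrite = TRUE)
dbDisconnect(con)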
I have a mongo collection with documents that have a schema structured like the following:
{ _id: bla,
  fname: foo,
  lname: bar,
  subdocs: [ { subdocname: doc1,
               field1: one,
               field2: two,
               potentially_huge_array: [...]
             }, ...
           ]
}
I'm using the Ruby Mongo driver, which currently does not support $elemMatch. I use an aggregation when extracting from subdocs, via a $project, $unwind and $match pipeline.
What I would now like to do is to page results from the potentially_huge_array array contained in the subdocument. I have not been able to figure out how to grab just a subset of the array without dragging the entire subdoc, huge array and all, out of the db into my app.
Is there some way to do this?
Would a different schema be a better way to handle this?
Depending on how huge "huge" is, you definitely don't want it embedded in another document.
The main reason is that unless you always want the array returned with the document, you probably don't want to store it as part of the document. How you would store it in another collection depends on exactly how you want to access it.
Reviewing the types of queries you most often perform on your data will usually suggest the best schema - one that will allow you to be efficient about the number of queries, the amount of data returned and the ease of indexing the data.
If your field is really huge and changes often, just place it in a separate collection.
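As a rough sketch of that separate-collection idea with the Ruby driver (collection, field names and the parent_id/element variables are placeholders), each array element becomes its own document, so paging is an ordinary skip/limit query:

require 'mongo'

client = Mongo::Client.new(['127.0.0.1:27017'], database: 'mydb')
elements = client[:subdoc_elements]

# One document per former array element, keyed by its parent subdoc.
elements.insert_one(parent_id: parent_id, value: element)

# Fetch the third page of 10, without loading the parent document.
page = elements.find(parent_id: parent_id)
               .sort(_id: 1)
               .skip(20)
               .limit(10)
               .to_a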
I'd like to store some trivial values for each user in the database, such as whether the user can see the newcomers' banner, the instructions on how to use each feature, etc. The number of values can increase as we come across new ideas.
So I've thought of two solutions for storing these data: either have a field for each value (so the table structure will change at least a few times), or have a single field holding all of them as a serialized dictionary. With the second option, I'm worried about hurting DB performance, about the extra logic for converting the dictionary to a string and back, and about whether storing dictionaries in the DB contradicts what a DB is for.
models.py
class Instruction(models.Model):
    user = models.ForeignKey('auth.User')
    can_see_feature_foo_instruction = models.BooleanField()
    can_see_feature_bar_instruction = models.BooleanField()
    ...
or
class Instruction(models.Model):
    user = models.ForeignKey('auth.User')
    instruction_prefs = models.CharField() # Value will be "{'can_see_foo_inst':True, 'can_see_bar_inst':False, ...}"
Which would be the best solution?
It depends on whether you need to be able to search on these fields. If so, the text-field option is not really suitable, as the individual flags won't be indexed. But if not, then this is a perfectly good way to go about it. You might want to consider storing it as JSON, which is useful as a method of serializing dicts to text and getting them back. There are plenty of implementations of "JSONField" for Django around that will take care of serializing/deserializing the JSON for you.
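A minimal sketch of the hand-rolled version (field and method names are illustrative); a JSONField implementation would hide this plumbing:

import json
from django.db import models

class Instruction(models.Model):
    user = models.ForeignKey('auth.User')
    instruction_prefs = models.TextField(default='{}')

    def get_prefs(self):
        # Deserialize the stored JSON string back into a dict.
        return json.loads(self.instruction_prefs)

    def set_prefs(self, prefs):
        # Serialize a dict of flags to JSON for storage.
        self.instruction_prefs = json.dumps(prefs)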
Django has a built-in permission system. Try reading this: https://docs.djangoproject.com/en/dev/topics/auth/#permissions
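For instance, a sketch of the built-in system (it assumes a can_see_feature_foo permission declared in some model's Meta.permissions under an app named myapp):

from django.contrib.auth.models import Permission

# Grant the permission to a user, then check it.
perm = Permission.objects.get(codename='can_see_feature_foo')
user.user_permissions.add(perm)
user.has_perm('myapp.can_see_feature_foo') # => True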
Update
I think if you really want to use an Instruction model, you can use something like a JSONField to store the instructions. That way you can do something like instruction.key to access a value. You can try this: https://github.com/derek-schaefer/django-json-field
You can create a model for key-value pairs of instructions/permissions per user.
E.g.
class Instruction(models.Model):
    user = models.ForeignKey('auth.User')
    key = models.CharField(max_length=20)
    value = models.BooleanField()
Then you can create multiple instances of this for each user, depending on the permissions they have.
>>> instr1 = Instruction()
>>> instr1.user = user1
>>> instr1.key = 'can_see_feature_foo'
>>> instr1.value = True
>>> instr1.save()
>>> instr2 = Instruction()
>>> instr2.user = user1
>>> instr2.key = 'can_see_feature_bar'
>>> instr2.value = True
>>> instr2.save()
....
# To query
>>> Instruction.objects.filter(user=user1, key='can_see_feature_bar')
If you use a model with a CharField to store the instruction and a ManyToManyField to the users, you can create and assign any number of instructions to any number of users.
class Instruction(models.Model):
    user = models.ManyToManyField('auth.User')
    instruction = models.CharField(max_length=100) # value is a single instruction; max_length is illustrative
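A short usage sketch (user1, user2 and the default reverse accessor name are assumptions):

from django.contrib.auth.models import User

# Create one instruction and grant it to several users.
instr = Instruction.objects.create(instruction='can_see_feature_foo')
instr.user.add(user1, user2)

# Find the users who have a given instruction via the reverse relation.
User.objects.filter(instruction__instruction='can_see_feature_foo')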