I used OrmLite to map the Java objects in my Google App Engine application to a set of database tables (MySQL). Is there a way to automatically create the tables on Google's Cloud SQL or a similar cloud-based SQL service, instead of having to create the tables manually myself?
OrmLite's documentation does not cover this, neither does Google App Engine's.
Any pointer in the right direction is highly appreciated.
Is there a way to automatically create the tables ...
OrmLite's documentation does not cover this ...
I'm not sure if you are talking specifically about Cloud SQL, but ORMLite certainly has a ton of documentation about creating tables in general.
TableUtils is the class that supports (uh) table utility methods like create and delete. Here are the javadocs.
The Getting Started section talks about using TableUtils to create the schema in the Code Example. There are more details in the How To Use section.
In the documentation index I see entries for: "creating a table" and "table creation".
To quote the sample code from the Getting Started docs:
// instantiate the dao
Dao<Account, String> accountDao =
    DaoManager.createDao(connectionSource, Account.class);
// if you need to create the 'accounts' table make this call
TableUtils.createTable(connectionSource, Account.class);
TableUtils also has a method, getCreateTableStatements, that returns the SQL statements for the create without executing them.
Is there a way to create database tables dynamically in Laravel? I have one Laravel build which has a database schema for quotes, built using the migrations tool. There will be several customers using the system, and each needs to have their own database table.
What I would like to happen is that when a function is called by the customer, it will use the quotes schema to create a new table like 'customer1_quotes' and use this table for that customer in the future. Additionally, when migrations are run, they will apply the updates to all tables with the given name structure (*_quotes).
If anyone has details on how to achieve this, or a recommended alternative approach, please let me know :)
Create a trait used by observers
Create a trait which loops through your customers and creates/updates the tables. You don't have to be in a migration to call DB::.
Use the trait in create/update controllers or, better, in model create/update observers. You could also create a console command for manual triggering or testing.
This should not be executed during maintenance; using php artisan down should ensure no jobs are run during migrations.
The migrations for the customer{id}_quotes tables can loop through the existing tables by querying the table names using LIKE and/or REGEXP. See the links below.
Links
Laravel Model Observers
How to dynamically set table name in Eloquent Model
Laravel's table Blueprint docs (5.8)
Get table names using LIKE or REGEXP
Optimization: Chunking results when getting query builder results
Edit: A repeatable migration probably won't work well and would be confusing to others. Using a trait that can be called from an observer gives you more flexibility, which is better here.
To give you the question first: I want to know if it is possible to create a stored procedure or something in SQL Server that intercepts and translates SELECT, INSERT, and UPDATE commands. Now for the explanation:
I am writing a web application to replace an old desktop app. It's a business app which is basically a database interface with reports and searches and all the good ol' CRUD. The new and old apps need to live in harmony together, since some customers may be using the old and new together to access the same DB.
My problem is that the original database format stores most data in a single blob of text (1 nvarchar(MAX) field). I want to add functionality to search on fields stored in the blob, but it will be cumbersome and slow. I would like to update the database format without changing the desktop app at all, hence the question above.
It occurs to me that I could do this on the client by writing a wrapper class for the data access object and then do a bulk replace in the client code to reference the wrapper, but I want to know what my options are on the server as well.
In case anyone wants to know, the old app is in VB6 and the new in C#.
EDIT
Alright, so it looks like if I do anything on the server side we are looking at adding stored procedures and then updating the client VB6 code to reference the stored procs, doing something like a bulk replace of SELECT with sp_oldselect ... to return the data in a different format. I'm guessing a client-side wrapper would be the best solution for the time being. Old apps die hard.
You can create a bunch of views for the old client and let it query those views. It will be slow as hell in most cases, but it can 'replace' the select queries. For updates and inserts... well, INSTEAD OF triggers on the views could help in some cases, but they will require lots of processing.
However, my suggestion is to provide exactly the same functionality in the web app and deprecate the desktop app. When the desktop app's share is low enough, stop supporting it. From that point on, you are (mostly) free to add new functions, upgrade the database schema, etc.
I agree with JonH that a lot can go wrong here, but you can try reading up on INSTEAD OF triggers in MS SQL Server here: https://technet.microsoft.com/en-us/library/ms179288(v=sql.105).aspx
Hi, I am a newbie to programming. I have 100 or so CT scans stored on a PACS (dcm4che). I am trying to link all patients to a teaching file database (a simple Django application) which will have teaching points on each case. Can someone direct me to a tutorial, or give a brief indication of what sort of programming will integrate the two? I do realise the generic nature of the question. I have 20 days to work on this, so I am willing to start from scratch.
Thanks
I would recommend against anything as specific as diving into dcm4che internals. Instead, if you would like to use a standard API, you should use the QIDO-RS/WADO APIs provided by dcm4chee.
One of the main authors also documented how to install such an instance here.
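To give a feel for what the QIDO-RS route looks like, here is a minimal Python sketch that lists a patient's CT studies as DICOM JSON. The base URL and the DCM4CHEE AE title are assumptions taken from a default dcm4chee-arc-light install, so adjust them to your deployment.

# Minimal sketch of a QIDO-RS query against dcm4chee (assumed default URL/AE title).
import requests

QIDO_BASE = "http://localhost:8080/dcm4chee-arc/aets/DCM4CHEE/rs"  # assumption

def find_ct_studies(patient_id):
    """Return the DICOM-JSON list of CT studies for one patient."""
    response = requests.get(
        QIDO_BASE + "/studies",
        params={"PatientID": patient_id, "ModalitiesInStudy": "CT"},
        headers={"Accept": "application/dicom+json"},
    )
    response.raise_for_status()
    # QIDO-RS answers 204 with an empty body when nothing matches.
    return response.json() if response.status_code == 200 else []

for study in find_ct_studies("12345"):
    # 0020000D is the StudyInstanceUID tag in DICOM JSON.
    print(study["0020000D"]["Value"][0])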
I would suggest binding your Django app or project to the appropriate dcm4chee database. Since you have all privileges, just create a new table within the dcm4chee database which has a simple foreign-key-style column named SOPInstanceUID. In this column you just store the SOPInstanceUID of your preferred images. Then of course you can additionally provide all the columns you need for your teaching application.
You can of course also create a separate database, bind your Django app to both databases, and use the SOPInstanceUID as the main key to establish the relationship between the dcm4chee db and the teaching db.
Within your Django app you can then manage your teaching table or db and query the filenames of the images which you have selected for teaching. The key of this relationship is the SOPInstanceUID of the DICOM image.
This approach just needs some expertise in SQL, some knowledge of the preconfigured database, and of course Django and DICOM.
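As an illustration, the teaching table could be as small as a single Django model. The model and field names below are hypothetical; the only essential part is storing the SOPInstanceUID so you can join back to the image in dcm4chee.

# Hypothetical teaching-file model; sop_instance_uid is the key that links a
# case back to the corresponding image stored in dcm4chee.
from django.db import models

class TeachingCase(models.Model):
    # DICOM UIDs are dotted numeric strings of at most 64 characters.
    sop_instance_uid = models.CharField(max_length=64, unique=True)
    diagnosis = models.CharField(max_length=255, blank=True)
    teaching_points = models.TextField(blank=True)
    created = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return "{} ({})".format(self.diagnosis, self.sop_instance_uid)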
I have a Django project with multiple apps. They all share a db with engine = django.db.backends.postgresql_psycopg2. Now I want some of GeoDjango's functionality and have decided to integrate it into my existing project. I read through the tutorial, and it looks like I have to create a separate spatial database for GeoDjango. I wonder if there is any way around that. I tried adding this to one of my apps' models.py without changing my db settings:
from django.db import models
from django.contrib.gis.db.models import PointField

class Location(models.Model):
    location = PointField()
But when I run syncdb, I get this error:
File "/home/virtual/virtual-env/lib/python2.7/site-packages/django/contrib/gis/db/models/fields.py", line 200, in db_type
    return connection.ops.geo_db_type(self)
Actually, as I recall, django.contrib.gis.db.backends.postgis is an extension of postgresql_psycopg2, so you could change the db driver in settings, create a new db with the spatial template, and then migrate the data to the new db (South is great for this). GeoDjango by itself is highly dependent on the database's internal methods, so, unfortunately, you can't use it with a regular db.
The other way: you could make use of Django's multi-db ability and create an extra db for the GeoDjango models.
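A rough sketch of that multi-db setup is below; the database alias "geo", the app label "geoapp", and the router path are placeholders, not anything GeoDjango prescribes.

# settings.py (sketch): keep the existing default db and add a PostGIS-backed one.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",  # placeholder
    },
    "geo": {
        "ENGINE": "django.contrib.gis.db.backends.postgis",
        "NAME": "mygeodb",  # placeholder
    },
}
DATABASE_ROUTERS = ["myproject.routers.GeoRouter"]  # placeholder path

# routers.py (sketch): send the hypothetical "geoapp" app's models to the "geo" db.
class GeoRouter(object):
    def db_for_read(self, model, **hints):
        return "geo" if model._meta.app_label == "geoapp" else None

    def db_for_write(self, model, **hints):
        return "geo" if model._meta.app_label == "geoapp" else None

    def allow_syncdb(self, db, model):
        # Older Django (syncdb era); newer versions use allow_migrate instead.
        if model._meta.app_label == "geoapp":
            return db == "geo"
        return db == "default"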
Your error looks like it comes from not changing the database backend in your settings file. You don't technically need to create a new database using the spatial template; you can simply run the PostGIS scripts on your existing database to get all of the geospatial goodies. As always, you should back up your existing database before doing this, though.
I'm not 100% sure, but I think that you can pipe postgis.sql and spatial_ref_sys.sql into your existing database, grant permissions to the tables, and change the db setting to "django.contrib.gis.db.backends.postgis" (after you have installed the deps, of course).
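In settings terms that is just swapping the ENGINE string while keeping your existing connection details (the values below are placeholders):

# settings.py (sketch): only the backend changes; keep your existing credentials.
DATABASES = {
    "default": {
        "ENGINE": "django.contrib.gis.db.backends.postgis",  # was postgresql_psycopg2
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
    }
}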
https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template
I'd be interested to see what you find. Be careful: PostGIS installation can build some character, but you don't want it to build too much.
From the docs (django 3.1) https://docs.djangoproject.com/en/3.1/ref/databases/#migration-operation-for-adding-extensions :
If you need to add a PostgreSQL extension (like hstore, postgis, etc.) using a migration, use the CreateExtension operation.
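For example, a minimal migration along those lines might look like this (the app name and dependency are placeholders):

# Sketch of a migration that enables the PostGIS extension before any spatial fields.
from django.contrib.postgres.operations import CreateExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ("myapp", "0001_initial"),  # placeholder for your app's previous migration
    ]

    operations = [
        CreateExtension("postgis"),
    ]

Note that creating the extension requires sufficient database privileges (typically superuser) on the PostgreSQL side.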
I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with the data, is it "safer" to create a script (with whatever tool fits you) that:
1. Generates the Bootstrap commands with the domain classes, runs them in a test or dev environment, and then uses the native db commands to export the data to prod?
2. Creates the DB insert script directly, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for generating the IDs, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script that uses the Groovy SQL interface to pull out the data and then put that data into domain objects appears to be the easiest approach. Once you have the data imported, you just use the commands specific to your database system to move that data to the production database.
Update:
Apparently the updated entry referenced from the blog entry I link to no longer exists. I was able to get this working using the code at the following link, which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems that the simplest solution is to take into account that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Keeping this in mind when creating whatever scripts you need (in the language of your preference) should suffice. I understand it's planned for the 1.3 release that every table will have its own sequence.