Access to the database and importing data into Magento

I am considering building a mechanism that would import data into the Magento database.
However, as I read in the documentation, it is recommended to use the models available in Magento by default where possible.
My question is whether it is possible to use the model approach without creating a Magento module, and then execute this code from the command line.
Or would a module be the better idea? And what if I intend to build two import mechanisms, where one uses a custom, standalone table (I may need one extra table for one customization) and the other uses the tables and models already available in Magento?

To bootstrap Magento and use it from the command line, create a PHP file starting with:
<?php
require_once '../Mage.php'; // adjust to the real path of app/Mage.php
Mage::app();
Mage::register('isSecureArea', true); // allows model deletes outside the admin scope
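From there, the regular Magento models are available exactly as they are inside a module, so a module isn't strictly required for a command-line import. A minimal sketch of a product import under those assumptions (the CSV file name, its column order, and the attribute set/website IDs are illustrative, not values you can rely on):
<?php
require_once __DIR__ . '/app/Mage.php'; // assumes the script sits in the Magento root
Mage::app('admin');
Mage::register('isSecureArea', true);

// Hypothetical import.csv with columns: sku, name, price
foreach (array_map('str_getcsv', file(__DIR__ . '/import.csv')) as $row) {
    list($sku, $name, $price) = $row;
    Mage::getModel('catalog/product')
        ->setSku($sku)
        ->setName($name)
        ->setPrice($price)
        ->setTypeId('simple')
        ->setAttributeSetId(4)     // default attribute set; adjust for your store
        ->setWebsiteIds(array(1))  // default website
        ->setStatus(1)             // enabled
        ->setVisibility(4)         // visible in catalog and search
        ->save();
}
For the standalone custom table you mention, you could keep using plain SQL through Mage::getSingleton('core/resource')->getConnection('core_write'), while the Magento-backed import goes through models as above.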

Related

Symfony 6 / Doctrine: mapping and importing only some tables from an existing database

I'm looking to map and import an existing database into a Symfony 6 project.
I know this can be done using the command:
php bin/console doctrine:mapping:import "App\Entity" annotation --path=src/Entity
But this database is very large and has a lot of tables, and I don't want them all.
Do you know a way to "select" the tables I want to map? I know the tables I don't want start with "_" or "inv_". Perhaps there is a way to apply a "where" clause?
There is a --filter= option on doctrine:mapping:import, but I think it's not what you're looking for.
If you are migrating your codebase to Symfony and Doctrine ORM wasn't used before, it would be much easier to just start over and do it all yourself. Yes, it's tedious, especially with a huge database, but you will end up with much "cleaner" entities, and during this phase you can decide which tables to ignore.
But if you still want to try to "import" somehow, consider the following steps:
1. In your local development environment, create a copy of your database, just the schema without data, so you have all the tables but they're empty (with mysqldump, don't forget --no-data; see the commands after this list).
2. Drop all the tables you don't want.
3. Rename some tables if you think their names won't fit Doctrine's naming conventions.
4. Switch to that copy database in your .env (change the DB name in DATABASE_URL).
5. Try the import again with doctrine:mapping:import. You may need to adjust some tables by repeating steps 2 and 3, then import again.
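For step 1, the schema-only copy can be done like this (database names are placeholders):
mysqldump --no-data legacy_db > schema.sql
mysql -e "CREATE DATABASE legacy_copy"
mysql legacy_copy < schema.sql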
If the import succeeds and you have a bunch of entities, now comes the boring and tedious part: you have to manually check all the classes in src/Entity.
Depending on your database (MySQL, PostgreSQL, SQLite, etc.), not all column types will be exactly what you want.
Furthermore, most many-to-one/one-to-many relations and all many-to-many junction tables will probably be converted to standalone entities like src/Entity/CategoryToProduct.php, which isn't right. You have to delete them and recreate the relations by hand, as sketched below.
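Such a hand-written relation might look like this (the entity, property, and junction-table names are assumptions for illustration, using the annotation mapping from the question):
<?php
// src/Entity/Product.php (illustrative)
namespace App\Entity;

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\Common\Collections\Collection;
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Product
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    private ?int $id = null;

    /**
     * Replaces the generated CategoryToProduct entity: the existing
     * junction table backs a real many-to-many relation instead.
     * @ORM\ManyToMany(targetEntity="Category")
     * @ORM\JoinTable(name="category_to_product")
     */
    private Collection $categories;

    public function __construct()
    {
        $this->categories = new ArrayCollection();
    }
}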
If you happen to have a diagram of your database made with MySQL Workbench, you can use these tools to export the diagram into working entities.
First you set up this with Composer: https://github.com/mysql-workbench-schema-exporter/mysql-workbench-schema-exporter
And then this: https://github.com/mysql-workbench-schema-exporter/doctrine2-exporter
The install command is:
composer require --dev mysql-workbench-schema-exporter/doctrine2-exporter
You have plenty of options to parameterize the desired output, the entities' namespace and so on; everything is described in the second GitHub repo if you want to use the Doctrine 2 ORM export format.

Clone a database or run an SQL dump on the fly in Laravel 4

I have a big project in Laravel 4, and I need to create a new database with a lot of tables on the fly. I can create an empty database on the fly, but how do I fill it with tables? I can't use migrations on the fly, because this application has a lot of permissions, so the user can't pass them.
The first option is to clone an empty database (empty of data, but with tables).
The second option is to run an SQL dump file on the fly.
Can someone explain how to do that?
Laravel provides an option to fill the database through seeding; check the docs:
Laravel also includes a simple way to seed your database with test data using seed classes. All seed classes are stored in app/database/seeds. Seed classes may have any name you wish, but probably should follow some sensible convention, such as UserTableSeeder, etc. By default, a DatabaseSeeder class is defined for you. From this class, you may use the call method to run other seed classes, allowing you to control the seeding order.
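A minimal seed class might look like this (the users table and its columns are assumptions for illustration):
<?php
// app/database/seeds/UserTableSeeder.php
class UserTableSeeder extends Seeder {

    public function run()
    {
        DB::table('users')->delete(); // start from a clean table

        DB::table('users')->insert(array(
            'email'    => 'admin@example.com',
            'password' => Hash::make('secret'),
        ));
    }
}
Register it in DatabaseSeeder::run() with $this->call('UserTableSeeder'); so it participates in the normal seeding order.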
After defining the seed class, you can seed the database with the command:
php artisan db:seed
Check this blog post on how to use seeding, along with the Faker library, to prefill the database with realistic test data.
If you can't use the command line, you can call the seed command from your code; check this thread.
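If the command line is off limits, the same command can be triggered from application code, guarded by whatever permission checks your app needs (the --class option, which limits the run to a single seeder, is optional):
Artisan::call('db:seed', array('--class' => 'UserTableSeeder'));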

Is there any way to create migrations in beego?

I haven't found anything in the documentation except the "syncdb" command, which creates the database tables from scratch. Is there any command to create and run migrations based on the ORM model, like in Django? Add a field, change a type, etc.
No, orm.RunSyncdb(name, force, verbose) and its command-line equivalent only do a small subset of what tools like Django's South can do.
Beego's ORM can:
Create new tables from scratch
Drop all tables (force = true)
Add new columns as you extend your model
You need to handle dropping columns, and any changes to the column parameters used to initially create the table, yourself.
Sadly, beego doesn't include this feature, and no Go framework (as of today) does; they all delegate that to other libraries.
What you can do, however, is use goose for migrations:
https://bitbucket.org/liamstask/goose
or any other migration library, as discussed in the following thread:
http://www.reddit.com/r/golang/comments/2dlbz5/database_migration_handling_in_go/
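With goose specifically, the typical flow looks roughly like this (the migration name is illustrative, and connection details live in goose's db/dbconf.yml):
goose create AddUserEmail sql
goose up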
Remember that thanks to the modularity of beego you can also use any other ORM (like gorm).
Feel free to search for avelino/awesome-go on Google if you want a list of tools/libs around the Go ecosystem.
Yes, you can create migrations in beego now. For example, if you need to create a new table, you can start by generating a new migration file using the bee tool:
bee generate migration create_user_table
This command creates a file inside the database/migrations folder. The file name contains the date, time, and name of the migration.
For further details you can check this article: https://ncona.com/2017/10/database-migrations-in-beego
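After filling in the Up and Down methods in the generated file, the migration can be applied with bee as well (the driver and connection string are placeholders for your setup):
bee migrate -driver=mysql -conn="root:@tcp(127.0.0.1:3306)/mydb"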

Integrating GeoDjango into existing Django project

I have a Django project with multiple apps. They all share a DB with engine = django.db.backends.postgresql_psycopg2. Now I want some functionality of GeoDjango and have decided to integrate it into my existing project. I read through the tutorial, and it looks like I have to create a separate spatial database for GeoDjango. I wonder if there is any way around that. I tried adding this to one of my apps' models.py without changing my DB settings:
from django.db import models
from django.contrib.gis.db.models import PointField

class Location(models.Model):
    location = PointField()
But when I run syncdb, I get this error:
File "/home/virtual/virtual-env/lib/python2.7/site-packages/django/contrib/gis/db/models/fields.py", line 200, in db_type
return connection.ops.geo_db_type(self)
Actually, as I recall, django.contrib.gis.db.backends.postgis is an extension of postgresql_psycopg2, so you could change the DB driver in settings, create a new DB with the spatial template, and then migrate the data to the new DB (South is great for this). GeoDjango itself is highly dependent on the database's internal methods, so unfortunately you can't use it with a regular DB.
The other way: you could make use of Django's multi-db ability and create an extra DB for the GeoDjango models.
Your error looks like it comes from not changing the database engine in your settings file. You don't technically need to create a new database using the spatial template; you can simply run the PostGIS scripts on your existing database to get all of the geospatial goodies. As always, you should back up your existing database before doing this, though.
I'm not 100% sure, but I think you can pipe postgis.sql and spatial_ref_sys.sql into your existing database, grant permissions on the tables, and change the engine setting to "django.contrib.gis.db.backends.postgis" (after you have installed the dependencies, of course).
https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template
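On older PostGIS releases, piping those scripts in looked roughly like this (the database name and script paths are placeholders and vary by installation):
psql -d mydb -f postgis.sql
psql -d mydb -f spatial_ref_sys.sql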
I'd be interested to see what you find. Be careful: a PostGIS installation can build some character, but you don't want it to build too much.
From the docs (Django 3.1), https://docs.djangoproject.com/en/3.1/ref/databases/#migration-operation-for-adding-extensions:
If you need to add a PostgreSQL extension (like hstore, postgis, etc.) using a migration, use the CreateExtension operation.

Best strategy to initially populate a Grails database backend

I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with the data, is it "safer" to create a script (with whatever tool fits you) that:
1. Generates the BootStrap commands with the domain classes, runs them in a test or dev environment, and then uses the native DB commands to export the result to prod?
2. Creates the DB's insert script assuming GORM's version = 0 and manually increments the soon-to-be autogenerated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for ID generation, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database, and I've found that writing a Groovy script that uses the Groovy SQL interface to pull out the data and then put it into domain objects is the easiest approach. Once you have the data imported, you just use the commands specific to your database system to move it to the production database.
Update:
Apparently the entry referenced from the blog post I linked to no longer exists. I was able to get this working using the code at the following link, which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems the simplest solution is to account for the fact that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Keeping this in mind when creating whatever scripts you need (in the language of your preference) should suffice. I understand it's planned for the 1.3 release that every table will have its own sequence.
