Can I load the Solr schema file from another server?

I am running Solr on 5 different instances. Making a change in the schema/data-config file is a big task, as I need to make the change on each server.
Can I load the schema file from a single server, so that the same path can be defined in each solrconfig and changes are reflected on every Solr instance?

If you're running Solr on multiple instances, you should really consider moving to a cluster-based installation (i.e. SolrCloud). This will give you a common schema across servers and allow you to easily create collections and make changes across all nodes in your network at the same time.
You can use a shared file system, but it will still require you to access each server (which you can do through the Core Admin API if you want to automate it) and reload the core so that it picks up any changes to the schema.
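For illustration, here is a minimal C# sketch of automating that reload, assuming standalone Solr instances exposing the standard Core Admin API on port 8983; the host names and core name below are placeholders, not anything from the original setup.

    // Reload a core on every Solr server so it re-reads schema.xml / solrconfig.xml from the shared path.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ReloadCores
    {
        static async Task Main()
        {
            var hosts = new[] { "solr1.example.com", "solr2.example.com" }; // list all five servers here
            using var http = new HttpClient();
            foreach (var host in hosts)
            {
                // Core Admin RELOAD: /solr/admin/cores?action=RELOAD&core=<name>
                var url = $"http://{host}:8983/solr/admin/cores?action=RELOAD&core=mycore";
                var response = await http.GetAsync(url);
                Console.WriteLine($"{host}: {(int)response.StatusCode}");
            }
        }
    }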

Related

Is it possible to store custom metadata for SSAS objects within SSAS? (for versioning)

I am trying to implement a form of versioning for our company's SQL Analysis Services Databases.
At the moment we do a very simple drop-and-recreate each time we deploy, using the SQLPS PowerShell module and XMLA. This causes problems when large measure groups have to be reprocessed because the database was recreated, and we would like, where possible, to reduce the deployment window, since it affects the backlog of transactions that must be reprocessed after deployment while the system catches up again.
I am therefore trying to implement a form of versioning so that the objects only need to be dropped and recreated when there is actually a schema or model change. Our entire deployment process is automated, so in those cases we would simply book a longer deployment slot.
I am trying to find out whether there is any functionality within SSAS itself that allows you to store some text value, perhaps a version number for the database, which could then be used to decide whether we need to drop and recreate the SSAS database.
So far I could not find anything, so my best bet is to manage the version number of the database via its related SQL database instance, using a tracking table in the SQL instance to check whether the latest version of a release has already been deployed.
Does anyone know of a method by which such custom metadata can be added to SSAS objects, other than modifying their names, which I would like to avoid?
Has anyone dealt with a similar scenario, and if so, how did you approach it?
I would use Annotations. Almost all components of an SSAS cube have an Annotations property, which is a collection of Annotation structures. Each annotation has a Name property that acts as a key and a Value that stores an arbitrary string.
The good thing about annotations is that they are stored in the SSAS metadata and can be retrieved back from the server once the cube is deployed.
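For reference, here is a rough C# sketch of writing and reading such an annotation with AMO (Microsoft.AnalysisServices). The server, database, and annotation names are placeholders, and the exact Annotation/AnnotationCollection members vary a little between AMO versions (in the Tabular object model, for example, Value is a plain string), so treat this as a starting point rather than a drop-in solution.

    using System;
    using System.Xml;
    using Microsoft.AnalysisServices;

    class DeployedVersionTag
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost");                 // your SSAS instance
            Database db = server.Databases.FindByName("SalesCube");  // placeholder database name

            // Look for an existing "DeployedVersion" annotation on the database.
            Annotation tag = null;
            foreach (Annotation a in db.Annotations)
                if (a.Name == "DeployedVersion") { tag = a; break; }
            Console.WriteLine("Deployed version: " + (tag != null ? tag.Value.InnerText : "<none>"));

            // Write/overwrite the tag after a successful deployment.
            if (tag == null)
            {
                tag = new Annotation("DeployedVersion");              // assumed constructor overload
                db.Annotations.Add(tag);
            }
            tag.Value = new XmlDocument().CreateTextNode("1.4.2");    // multidimensional AMO stores Value as an XmlNode
            db.Update();                                              // persist the metadata change on the server
            server.Disconnect();
        }
    }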

Can I apply changes across different databases on the same server using Liquibase?

We have a large enterprise system which has many databases on a single server (Sybase)
Developers will make a change in one db, script it, then maybe make a change in another db, add that to the list of scripts and so on.
Our release then runs through these scripts making changes to the objects in different databases in the same order.
Reading the Liquibase documentation, it seems like it would work if you applied all the changes to one db, then another, then another. That wouldn't really work in our case, as a change in one db may rely on a change made earlier in another db and vice versa.
How could I use Liquibase to do the same?
You probably need to start looking at Datical DB (disclaimer: I work at Datical), which is a set of tools and extensions around Liquibase to handle these kinds of situations.
Alternatively, you could do something similar, writing your own tools to control Liquibase. Liquibase is controllable at several different levels - you could use a scripting tool to execute the Liquibase command line, or you could use Java or Groovy (or any other language that integrates with tools in the JVM) and use the Liquibase classes more directly.
Liquibase does not currently support connecting to multiple different databases. It is being considered for version 4.0.
If your databases are the same database instance but different schemas, you can use the schemaName attribute to target changeSets at different schemas from one changeSet to the next. You will need a single connection URL that has access to all of the schemas.
If your databases are different instances or not all accessible from a single connection URL, you can probably create custom change classes or extensions that allow you to run SQL against different connections, although that will not be as clean or easy as the schemaName option.

Syncing Magento database from development to production

I use git for version control. I have development, staging and production environments. When I finish in development I push to staging for review by the client. When approved, I push the changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications?
How would I push those changes up to the production server, given that the production database is always changing?
Edit:
I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL and updates my development db accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure if it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first.
Now when it comes time to move to production, I pull the updated production repo onto the production server and allow Magento to do its thing. I also started using SQLyog recently; it has a database comparison wizard which shows me the differences between my development and production databases and allows me to merge the changes in selectively. It always creates a migration script, which I add to source control as well. If anything goes wrong I can run the comparison to see if anything was missed.
Does this sound like a decent workflow to you guys?
This is a common situation for developers. It's much easier to modify code and schema and be assured that all is well when there is a small codebase which is thoroughly understood and doesn't have too much flexibility for UI. Of course, this is not the case with Magento, which can be quite difficult to work into automated testing and continuous integration schemes. That said, there are some knowable, testable behaviors on which you can rely.
An Overview
When dealing with local development which is merged to production, one must be assured that the schema and data changes relevant to new or changed functionality are also applied when the filesystem is updated. This is actually how Magento itself works. Module configuration files can supply a version number and can configure setup resources. This information is used to enter into a schema & data modification workflow which results in version information being added to the database. It is this consistency between the file-designated version number and the database-registered version number that allows the system to infer that the database is in the appropriate state given the files present.
This means that when the new/updated module files are merged to production and the necessary conditions are met (e.g. the config cache is invalid, etc.), the database upgrade should take place. Your (proper) concern is that this process might break based on remote server-level differences, remote data differences, etc. Without a tightly-regulated integration testing process, there is some overhead.
Plan of Attack: Pick the Right Strategy
The essential activity in this area is assessing a module's impact on the database. This should be straightforward with any module which is worthy of being installed; check for any of the following:
1. A system.xml file
2. Existence of install/upgrade scripts in sql or data folders
3. Existence of a custom setup resource class (configured under the global/resources xpath)
4. Appropriate configuration XML (version number in the module config node & a setup resource under the global/resources xpath)
For 1, simply review the structure and know that its effects on the database will be limited to the core_config_data table, and generally only once an admin has saved values via the GUI (noting that 1. below applies as well).
For 2 & 3, review the scripts which are set to be run. These can be divided into four general areas:
1. Configuration settings - look for setConfigData() and deleteConfigData() calls
2. Table additions and edits (new tables, adding columns, etc.)
3. EAV-related changes and additions; look for EAV setup resources
4. Non-EAV data changes: installation of new data or modification of existing data
It's a matter of feel & intuition, but gauging the level of impact on the db will allow you to determine if you should clone production data down to local dev and test the setup workflow locally, verifying it works okay, then pushing to production and re-checking (backing up always!). If the changes are wide-ranging, it may be best to take the site offline so that you can ensure that you won't lose order or customer data if you need to revert after a botched upgrade.
You generally don't ever want to push data contained in a db from dev > prod. Your schema defs should be contained in Magento sql install scripts. If you do have actual new data you want to push up to prod, you'll have to do so on a case-by-case basis. You will most likely pull down from prod > dev to test out data and configuration before running the actual case on prod.
Case - 1:
If your production server has the same data (DB) which you have locally, then just copy the database and files to the production server and do the following:
1) Delete the content of the folder /var
2) Change the values of the file /app/etc/local.xml
There you can find your connection string data (database user, host and name).
3) Once your database has been uploaded, you need to make some changes.
Run this query:
SELECT * FROM core_config_data WHERE path = 'web/unsecure/base_url' OR path = 'web/secure/base_url';
You will get 2 rows. Update these rows by running this query:
UPDATE core_config_data SET value = 'YOUR_NEW_LIVE_URL' WHERE path LIKE 'web/%/base_url';
That’s all.
Case - 2:
If you don't want to change the DB data in production, then you need to install the modules via Magento Connect directly on the production server, and you can update the files which you have changed locally.

LINQ to SQL and the DBML file - multiple database development

The way I develop may not be correct; any advice is welcome.
At the moment I have a WPF application that uses a SQL 2008 database. I have a copy of the database on a laptop and on my home machine. My application is versioned using SVN and I am obviously able to go from the work laptop to the home machine and update/commit as required to ensure I am using the latest code for the application.
However, the database is a different story: whenever I make a change I create a backup and then transfer the backup to the other machine. This way I get the data and the changes made on each system. To make this work, the database connection uses a different connection string on each machine, and I change a setting in my app to use a different connection based on my location.
I have now started to use LINQ to SQL and DBML files in my application, and finally getting to the question, I don't know how I can change the connectionstring it uses in code so it will use the correct database in the DBML.
Also, is there a better way to transfer the database so I don't need to do the backups and restores? The only reason why I have not versioned the Schema is because I am not sure how that would handle my data as this is key to my development, ie various environment settings etc are stored in the DB and brought through at runtime.
Your Statement:
I have now started to use LINQ to SQL and DBML files in my application, and finally getting to the question, I don't know how I can change the connectionstring it uses in code so it will use the correct database in the DBML.
Yes it's possible.
MYDataContext mycontext = new MYDataContext("Your Connection String");
There is a constructor where you can change the connection string.
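For example, here is a minimal sketch of picking the connection string from configuration based on where you are running; the "Work"/"Home" connection string names and the "Location" appSettings key are assumptions for illustration, not part of your project.

    using System.Configuration;

    class DataContextFactory
    {
        // Reads an appSettings key "Location" ("Work" or "Home") and uses it to pick a
        // matching named connection string from the config file.
        public static MYDataContext Create()
        {
            string location = ConfigurationManager.AppSettings["Location"];
            string connStr = ConfigurationManager.ConnectionStrings[location].ConnectionString;
            return new MYDataContext(connStr);   // the constructor shown above
        }
    }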
This is such a common problem, and I have never found a minimal and clean solution to it. How to keep all the values and variables and databases and source files in sync between machines?
Well SVN works great for the source files.
For the database, I TRY to just use one DB if we can get away with it. All the devs point to one machine that hosts the db, then we aren't wasting time with DB setup and merging. If that's not possible, then we usually just end up dumping the database when there is a change and distributing the .bak file around. You can try adding this file to SVN, and it works. You can even have the DB dump on a schedule so that SVN is always getting a new copy. But it's still too much work to keep restoring a db over and over. Perhaps you could hook some scripting into SVN (we use TortoiseSVN on Windows) and have a job that would do that automatically. That'd be nice.
For the config files - I do ASP.NET, so I have web.config, connectionstrings.config, etc. - I do one of two things: either I manually copy the sections that need to be changed between machines and comment out the part that doesn't need to be used (clunky), or I've at times written ConfigurationSettings helper objects that resolve a config key to decide which setting to use, based on the current machine name. E.g.:
Say my current machine is DEV1. The server is SERVER1. I'll have config keys with names like DEV1.connections.sqlserver and SERVER1.connections.sqlserver. In the code I'll use the helper method GetConfig("connections.sqlserver"). GetConfig figures out which key to use based on the current machine name.
Using this method, I don't have to keep remembering to monkey around with the dozen .configs every time I upload to the server or change things. But I DO have to make a duplicate key for every machine that will be running the application, which can get a bit much. For large teams, instead of using machine names, I use group names and have a config key that assigns machine names to a group - with the idea that every machine in the group will have that application set up in an identical fashion - same file paths etc.
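A rough sketch of that GetConfig idea (the class name and exception handling here are illustrative):

    using System;
    using System.Configuration;

    public static class ConfigHelper
    {
        // Looks up "<MachineName>.<key>" in appSettings, e.g. "DEV1.connections.sqlserver".
        public static string GetConfig(string key)
        {
            string machineKey = Environment.MachineName + "." + key;
            string value = ConfigurationManager.AppSettings[machineKey];
            if (value == null)
                throw new ConfigurationErrorsException("No setting found for " + machineKey);
            return value;
        }
    }
    // Usage: string connStr = ConfigHelper.GetConfig("connections.sqlserver");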
Now onto your second question about LINQ: when you create a LINQ DBML, it will add a connection string to your config. You just have to make sure that you find this connection string and copy it into your active application. E.g.:
I have a solution that has 2 projects:
1 - website
2 - library
I put the DBML into the library project. If I go and look in the App.config of the library project, I'll see the connection string that LINQ wants to use. If I copy this connection string into the website's connectionstrings.config file, then when I reference the library and run the website, LINQ will be able to see the connection string it wants to use.
You can try SQL Server Merge Replication, using SQL Server Compact 3.5 as your laptop database and the database on your work/home machine as the master. However, you can only do this with SQL Server Standard Edition.
Another option is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/default.aspx
You could use Red Gate's SQL Compare and SQL Data Compare to script out changes to the database. You should be in the habit of scripting database changes anyway, as that is what you will need to do when it is time to move changes to prod. I would also make sure all database changes are in SVN; we never make any changes to the database without a script in source control.
I ended up just using multiple connection strings and then manually changing the connection on the dbml file whenever I moved locations. However I also have some code in place to programmatically change it based on the project setting for the location.
I haven't really got a good solution to the transferring of the databases and continue to use the backup and restore method.

Updating database on website from another data store

I have a client who owns a business with a handful of employees. He has a product website that has several hundred static product pages that are updated periodically via FTP.
We want to change this to a data-driven website, but the database (which will be hosted at an ISP) will have to be updated from data on my client's servers.
How best to do this on a shoestring? Can the database be hot-swapped via FTP, or do we need to build a web service we can push changes to?
Ask the ISP about the options. Some ISPs allow you to FTP-upload the .mdf (database file).
Some will allow you to connect with SQL Server Management Studio.
Some will allow both.
You have to ask the ISP.
Last time I did this, we created XML documents that were FTP'd to the website. We had an admin page that would clear out the old data by running some stored procs to truncate the tables, then import the XML docs into the SQL tables.
Since we didn't have the whole server to ourselves, there was no access to SQL Server DTS to schedule this stuff.
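As an illustration, here is a simplified C# sketch of that kind of admin import step. The dbo.Products table and Product element name are made up; the real page ran stored procs, but the truncate-then-import shape is the same.

    using System.Data;
    using System.Data.SqlClient;

    public static class ProductImporter
    {
        public static void Import(string xmlPath, string connectionString)
        {
            var ds = new DataSet();
            ds.ReadXml(xmlPath);                         // one DataTable per element type in the XML doc
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // Clear out the old data first, as described above.
                using (var truncate = new SqlCommand("TRUNCATE TABLE dbo.Products", conn))
                    truncate.ExecuteNonQuery();
                // Push the new rows in one bulk operation.
                using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.Products" })
                    bulk.WriteToServer(ds.Tables["Product"]);
            }
        }
    }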
There is a Database Publishing Wizard from MS which will take all your data and create a SQL file that can then be run on the ISP. It will also, though I've never tried it, go directly to an ISP database. There is an option button on one of the wizard screens that does it.
It does require the user to have a little training, and it's still a manual process, so maybe not what you're after, but I think it will do the job.
Long-term, building a service to upload the data is probably the cleanest solution, as the app can then control its import procedures. You could go grossly simple with this and just have the local copy dump some sort of XML that the app could read, making it not much harder than uploading the file while still being automatable. Having this import procedure would also help with development, as you now have an automated and repeatable way to sync data.
This is what I usually do:
You could use a tool like Red Gate's SQL Data Compare to do this. The tool compares data between two catalogs (on the same or different servers) and generates a script for syncing them.
