NAS license other than in database - dynamics-nav

I have a NAV database with license "A" saved in it. License "A" does not allow read access to certain tables (e.g. Job Queue).
Is it possible to run NAS (Navision Application Server) under another license "B" which allows access to the objects I need?
Simply replacing fin.flf in the NAS installation folder with license "B" does not work; NAS still uses license "A" saved in the database.

There are a couple of ways to approach this, depending on your ability and access to modify code within the system.
One such method, detailed by Kirki, involves changing how the license is stored in the database itself; it is a bit of a breaking change in how you manage the license files.
How to use multiple licenses simultaneously in service tier and development (and classic) environment
Normally, in a production database you would want to control this with security on the relevant tables and users rather than with multiple license files; that is much easier to accomplish and does not require any special development.


Best way to manage SQL Server on developer, test, staging, and production environments through Visual Studio 2013

I've read an article on the MS blog, another here on Stack Overflow, and this article.
They do shed some light on my scenario, but I feel I may be missing something...
The third article above nicely explains a possible way to deploy database versions, including schema and data... but it is oriented toward deploying to production.
I am looking to streamline deploying DBProjs to developer DB instances, test, staging, and production (all are SQL 2012 Standard Edition).
The developer instances may be a few versions off: we have contractors who leave, and it may be a couple of dev cycles before a new contractor tries to deploy.
Also, how do you get the schema on the target to clean itself up? I know we can turn off the restrictions on removing schema objects, but on the developer workstation instances the logins are different from other environments and we do not want those deleted! The second article has some clues about this, but its approach did not work when I tried it. We have one application role across all environments, and depending on the environment the right login is placed in it.
I have a sense I may have to propose changing our schema, which may not fly well with the other leads.
I would appreciate hearing from anyone who has a tried and true process in place that can cover seamless deployment to the 4 environments described above.
Thanks!
You might be interested in Deployment Manager and SQL Source Control from Red Gate (full disclosure - I work for Red Gate).
The approach these two products use for keeping development environments in sync is:
Developers edit a local database to make their changes (or all edit a shared DB across the whole team)
Developers can then synchronize the database to an existing source control repository (e.g. SVN/Git/TFS) using SQL Source Control
Other team members can update their databases from the repository, and the changes are applied to their local databases.
Deployment Manager works with a CI server to allow the automated deployment of any version of the database to a set of predefined environments. For example you might want an automated deployment to an integration environment after every commit. Deployments out to test/staging/production environments are then push button deployments when required.
Under the hood, Red Gate's SQL Compare comparison technology is used to compare the versioned database state to the target database state. This means that any development database can be updated to the latest state, even if it is much older than the head revision or belongs to a new member joining the team.
You can include filters within the packages/repository which will exclude certain objects (for example users, roles, keys, specific schemas). This means that you can deploy the same version/package to each environment, and it won't interfere with these objects.
My colleague has just written a great intro blog post with some videos if you're interested in more info.

Database name in Source control

We're developing an aspx project with Visual Studio 2010 Professional, SQL Server 2008 R2 and Team Foundation Server 2010. Since the development is being carried out in multiple offices, each developer has their own local instances of the databases.
I want to bring these multiple databases under source control (or at least the schemas of the DB, structure and stored procedures - data doesn't matter to me). My preferred approach is to add database projects to the VS solution, which is already source controlled in TFS. Any changes will be distributed by TFS, and can be deployed locally.
The problem I'm having is that the database projects contain a reference to a local database instance (server & name). When someone gets the latest version of my changes, they will have a reference to my local DB instance (which is different to their local DB instance). They would need to change the DB details (thus checking the dbproj out) in order to get my updates.
So, is there any way that the database server & name can be left out of source control while the schemas remain under source control? Any help would be much appreciated!
I'm not sure if you can. However, you could use an alias, so all of the developers use a database on their local machine but reference it by the same alias.
Take a look at: http://www.mssqltips.com/sqlservertip/1620/how-to-setup-and-use-a-sql-server-alias/ for how to set an alias up.
That way you can separate the database from the connection details.
I'm involved in developing a unique enforced database source control solution called DBmaestro TeamWork.
It has a plugin for SSMS which allows developers to work directly on the database objects (their working environment), run their tests, and then perform a check-in, which reads the metadata (table structures, procedures, functions, views, etc.) into the version control repository.
With the impact analysis it is easy to merge changes from different databases into a single database.
The impact analysis algorithm performs a 3-way analysis (not just a simple compare & sync) to identify changes originating from developer A that should not be reverted when another developer merges their changes, and it ignores the database name when running the impact analysis or generating the delta script.

How can I create and access multiple databases in Oracle 11g?

I bought Oracle 11g recently and I want all my developers to use it. Obviously I can't buy a separate license for each of them. So is it possible for me to create one database for each developer? By inference I know it is possible.
However, I couldn't find out how to do it. I googled, but there was no definitive guide for this particular case. Can you point me to the right resource?
Or could you list the steps to achieve this?
I would be ever grateful.
-
Sheldon
When you create a user in Oracle, you're creating a schema. A schema is a collection of tables and related objects (views, functions, stored procedures, etc.) specific to that schema. So each developer could have their own user/schema and work independently of one another. Access to other users' objects can be granted, and public synonyms can be created to ensure that YOUR_TABLE points to a YOUR_TABLE in a specific schema, without the need to specify that schema. But this can eat space...
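As a rough illustration of that per-developer setup (the user name, password, tablespace, quota, and shared table below are hypothetical placeholders, not anything prescribed above):

```sql
-- Minimal sketch: one schema per developer inside a single database.
-- Run as a DBA; all names and quotas are placeholders.
CREATE USER dev_alice IDENTIFIED BY "ChangeMe_1"
  DEFAULT TABLESPACE users
  QUOTA 500M ON users;

-- Basic privileges so the developer can log in and create objects in their own schema.
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO dev_alice;

-- Optional: expose a shared table under an unqualified name, as described above.
GRANT SELECT ON shared_schema.your_table TO dev_alice;
CREATE PUBLIC SYNONYM your_table FOR shared_schema.your_table;
```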
If there is shared development, it might be best to have a single schema so everyone is working on the same copy.
Create one database and give each developer their own schema (username/password).
As long as all your database instances are on the same server you can build as many as you want without paying any more. Performance might become an issue with more instances depending on how heavily used they are.
You don't mention your platform.
On Windows, here's how to use the Database Configuration Assistant (DBCA). I think it's pretty similar on *nix as well.
Each database so created has a different name. To access them it's simply a matter of using a tnsnames.ora file with different entries for each instance on the server.
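For example, two hypothetical entries in tnsnames.ora might look like the following (host, port, and service names are placeholders for your own):

```
DEV1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dev1))
  )

DEV2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dev2))
  )
```

Developers would then connect with a connect string such as dev_alice@DEV1 from SQL*Plus or any other client.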
You can buy Oracle personal edition for each developer and install it on their desktop/laptop. According to shop.oracle.com it's $460 per user. This way you can give everyone full access to Oracle and save a lot of trouble. Developers can learn Oracle more quickly and be more productive, and DBAs won't have to worry about them bringing down the server.
Or possibly you could even use it for free if your program is not in production yet. The Oracle Developer license lets you:
... use the Programs, subject to the restrictions stated in this
Agreement, only for the purpose of developing, testing, prototyping,
and demonstrating Your application and only as long as Your
application has not been used for any data processing, business,
commercial, or production purposes, and not for any other purpose.

What is the State of the Art for deploying database updates to production databases?

Every shop at which I've worked has had their own cobbled-together, haphazard, poorly understood and poorly maintained method for updating production databases.
I've never seen a consistent method for doing this.
So, in the most recent versions of SQL Server, what is the best practice for updating schema changes and migrating data from a development or test server to a production server?
Is there a 3rd party tool which handles this painlessly?
I'd imagine the ultimate tool would be able to
detect schema changes between two DBs and generate DDL to update one to the other.
include the ability to have custom code which performs custom data migration steps
allow versioning so a v1 db could be updated all the way to a v99 database, running all scripts and migration steps in order.
The three things I've used are:
For schemas
Visual Studio Database Projects. Meh. They are okay, but you still have to do a lot of the work yourself.
Red Gate's SQL Compare and the entire SQL Toolbelt. They've worked pretty hard to make this something you can version control. In practice I've found that with databases you are usually trying to get from point A in the version timeline to point B. With binaries, you often just clobber whatever is there with point B (an oversimplification, I know, but often true).
http://www.red-gate.com/
xSQL is a good place to start if your system is small and perhaps will remain small:
http://www.xsqlsoftware.com/LiteEdition.aspx
I don't work for or know anyone who works for or get any money from these people. Just telling you what I've done in the past.
For data
Red Gate has SQL Data Compare.
However, if you want something "free" (or included with SQL Server)
I've actually had a lot of success just using BCP and writing a small system that injects and extracts data. Generally when I find myself doing this I ask myself, "Why? If I am changing data, does that mean I am really changing something that is configuration? Can I use a different method here?" But sometimes you can't (maybe it's a legacy system where the original devs thought databases are for everything).
The problem with BCP extracts is that they don't version-control very well. There are tricks I've used, like extracting in character mode and stuffing an ORDER BY into the extract query, to try and pull rows out in an order that makes them somewhat more palatable for version control.
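As a rough sketch of that trick (the server, database, and table names are invented for illustration): -c extracts in character mode so the file diffs reasonably, -T uses a trusted connection, and the ORDER BY keeps the row order stable between extracts.

```
bcp "SELECT ConfigKey, ConfigValue FROM MyDb.dbo.AppConfig ORDER BY ConfigKey" queryout AppConfig.dat -c -T -S MYSERVER

bcp MyDb.dbo.AppConfig in AppConfig.dat -c -T -S MYSERVER
```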
For small projects I have used Red Gate to manage schema and data migrations with a lot of success. It is very easy to use and works for most cases.
For larger enterprise systems, you normally save all the schema and data change SQL scripts as text files and run them in order. We also include a rollback script to run in case something goes wrong during the migration. Run these on the UAT server, then the test/staging/pre-prod server, then on production. Keeping a copy of all these files plus their rollback scripts should allow you to move between multiple versions of a DB.
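A minimal sketch of how such a versioned script pair is often guarded (the SchemaVersion table, version number, and column are purely illustrative, not taken from any specific tool):

```sql
-- Shared bookkeeping table (created once per database).
IF OBJECT_ID('dbo.SchemaVersion') IS NULL
    CREATE TABLE dbo.SchemaVersion (Version int PRIMARY KEY, AppliedOn datetime NOT NULL);
GO

-- 023_add_customer_email.sql: upgrade script, safe to re-run.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 23)
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    INSERT INTO dbo.SchemaVersion (Version, AppliedOn) VALUES (23, GETDATE());
END;
GO

-- 023_rollback.sql: matching rollback script.
IF EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 23)
BEGIN
    ALTER TABLE dbo.Customer DROP COLUMN Email;
    DELETE FROM dbo.SchemaVersion WHERE Version = 23;
END;
GO
```

Numbering the files encodes the order, so running them in sequence brings an older database forward, and each rollback script undoes one step if a deployment fails.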
There is also http://code.google.com/p/migratordotnet/ if you're using .NET; it allows you to define these scripts in code. It is very useful if you want to deploy across multiple DBs in an automated way, and it makes it easy to say "set my DB to version 23" or "revert my DB to version 5", etc. It works for schema and data, but I would only really use it for a few rows of data.
First, you have to consider that the requirements vary a lot between scenarios:
Customers purchase v1 of the product at Costco and install it in their home office or small business. When v2 comes out, the customer purchases a box of the product and installs it on a new computer. They export the data from the v1 installation and import it into the v2 installation. Even though behind the scenes both v1 and v2 use a SQL Express instance, there is no supported upgrade. Schema changes on the deployed databases are not expected (hidden database, non-technical user) and definitely not supported. The only 'upgrade' path supported is an explicit export/import, which probably uses an XML file or something similar.
A business purchases v1 of the product with a support contract. It installs it on its department SQL Server instance, from where the data is accessed by the purchased product and by many more integration services, reports, etc. When v2 is released, the customer runs the prescribed upgrade procedure; if it runs into problems, it calls the product vendor's customer support line, which walks the customer through specific steps for their deployment. Database schema customizations are expected and often supported, including upgrade scenarios, but the schema changes are made by the customer (not known at v2 design time).
A web startup has a database that backs the site. Developers make changes on their personal instances and check in changes. Automated build deployment with continuous integration picks up the changes, deploys them against a test instance, and runs build validation tests. The main branch build can be, at any moment, deployed into production. Production is the one database that backs the site. The structure of the production database is documented and understood 100%; every single change to the production database schema occurs through the build system and QA process. On a side note, this is the scenario most SO users who ask your question have in mind, minus the part about '100% documented and understood'. I give the example of a database backing a web site, but the deployment can really be anything. The gist of it is that there is only one production database (it may include HA/DR copies, and it may consist of multiple actual SQL Server databases), and it is the only database that has to be upgraded.
A successful web startup. Same as above, but the production database has 5TB of data and 5 minutes of downtime make the CNN headlines. Schema changes may involve setting up replicas and copying data into new schemas with continuous updates, followed by an online switch of operations to the replica. Schema changes are designed by MCM experts, and deploying a schema change can be a multi-week process.
I could go on with more scenarios. The point is that the requirements of each of these cases are so vastly different that no 'state of the art' can answer all of them. Some scenarios will be perfectly OK with a schema diff deployment tool like vsdbcmd or SQL Compare. Other scenarios will be much better served by explicit versioning scripts. Others might have such specific requirements (e.g. zero downtime) that each upgrade is a project on its own and has to be specifically custom tailored.
One thing is clear across all scenarios, though: if your shop treats the development database MDF file as 'source' and makes changes to it using the management tools, that is always a major #fail. All changes should be captured explicitly as some sort of source control artifact, and this is why I most favor explicit version scripts, as in Version Control and your Database. But I reckon that the VSDB project support for compile-time schema validation and its ease of refactoring schema objects make a pretty powerful proposition, and VSDB schema compare deployment may be OK.
Another important approach that has to be addressed is code-first schema modeling from tools like EF or LINQ to SQL. It works brilliantly to deploy v1, but fails miserably at any subsequent version. I strongly discourage these approaches.
But to sum up and answer in brief: as of today, the state of the art sucks.
At Red Gate we'd recommend one of two approaches depending on your requirements and how formal you need your processes to be. If you have a development database and simply want to push changes to production, SQL Compare is the tool for the job. A level of versioning can be achieved by using the schema snapshots.
However, if you want full source control benefits, such as team collaboration, sandboxed environments, audit trail, compliance, history, rollback, etc., you should consider SQL Source Control. This links development databases to Team Foundation Server or Subversion.

Managing databases in Open Source Software Projects

I was wondering how databases are managed in open source projects, which are usually hosted in repositories like CVS or SVN. Placing code in the SVN is very logical, as it allows different team members to get updated pieces of code, but what about databases?
Are their schemas and contents (.sql files, I assume) placed inside the SVN too? In that case, if I were creating a web application, would this require developers to keep updating their local databases with the newest .sql file?
Or is it more like having a central server which members can modify, and which their software just connects to over the net?
I'm planning to start an open source web application project (which requires the use of a database) but am a bit confused about how to go about the database management part.
Typically you would include one or two things:
Schema creation scripts
Initial data to be loaded into the database
Both of these should be text files. If your data needs any special processing before it can be loaded into the DBMS, then include a tool for that too.
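For instance, a project might keep two small files like these under version control (the table and the seed row are purely illustrative):

```sql
-- schema.sql: schema creation script
CREATE TABLE app_user (
    id        INTEGER      PRIMARY KEY,
    username  VARCHAR(100) NOT NULL UNIQUE,
    created   TIMESTAMP    NOT NULL
);

-- seed_data.sql: initial data to be loaded into the database
INSERT INTO app_user (id, username, created) VALUES (1, 'admin', CURRENT_TIMESTAMP);
```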
One thing you should not do is include any particular binary database file in your source control. For example, a SQLite database file would not be appropriate. Binary database files are not normally portable across architectures or versions.
In my experience these types of applications usually include database setup as part of the build. Usually you have to install the DB where you need it and then install the client. Also, these databases are usually open source as well, like MySQL or something.
This depends on the project you are doing. For example, if all the developers are in the same location in one company, a central database server may be applicable. If the developers are distributed around the world, then a central database server is probably out of the question and every developer creates their own copy of the database for development.
I would think that the most common option is that every developer uses their own database.
In any case you'll want to keep the schema creation and initial data creation files in version control. This way all the developers can create a new database easily.
